Interaction Studies 13:2 (2012). doi 10.1075/is.13.2.04sin. issn 1572–0373 / e-issn 1572–0381. © John Benjamins Publishing Company

If it looks like a dog

The effect of physical appearance on human interaction with robots and animals

Anne M. Sinatra1, Valerie K. Sims1, Matthew G. Chin1 & Heather C. Lum2

1University of Central Florida / 2Penn State Erie, The Behrend College, USA

This study was designed to compare the natural free form communication that takes place when a person interacts with robotic entities versus live animals. One hundred and eleven participants interacted with one of four entities: an AIBO robotic dog, Legobot, Dog or Cat. It was found that participants tended to rate the Dog as more capable than the other entities, and often spoke to it more than the robotic entities. However, participants were not positively biased toward live entities, as the Cat often was thought of and spoken to similarly to the AIBO robot. Results are consistent with a model in which both appearance and interactivity lead to the development of beliefs about a live or robotic entity in an interaction.

Keywords: Human-robot interaction; human-animal interaction; AIBO; free form communication; attributions; human-entity interaction

“If it looks like a duck, quacks like a duck and swims like a duck, then it must be a duck”, is a saying with which many people are familiar. The robotic dog AIBO was designed to look like a dog, walk like a dog, and act like a dog. While the design and actions approximate a dog, there are still elements of AIBO that are robot-like (e.g. it has no fur, and has robotic movements). The question that is naturally raised as a result of this is: In spite of these limitations in appearance and behavior, will people treat AIBO as if it is a real dog?

To begin to address this question, researchers have sought to understand whether and how people make distinctions between robots and their living counterparts. By using an AIBO, researchers have the opportunity to compare a robotic dog to its real counterpart. Through these comparisons, it can be determined if people make distinctions between the robot and live entity in regards to behavioral interactions and attributions. This paper will further the field of human-robot interaction by observing how humans interact with and distinguish live and robotic entities.

Human-robot interactions have been empirically studied for many years as robotic technology has begun to flourish. Recently robots have begun to be used for purposes outside of industry and the manufacturing of products. Indeed, we have seen robots becoming social beings developed to be our companions (e.g. AIBO), our help, and our entertainment, and it is this direct contact with us that has allowed research in this area to be ever present. This has implications for various fields of research including but not limited to robotics, engineering, psychology, and computer science (Fong, Nourbakhsh & Dautenhahn 2003).

The sociability of robots is a more recent phenomenon, and it is becoming an increasingly important component that robots may need in order to interact in a human world. Even such terms as “robotiquette” have been developed and studied in order to instill more effective human-robot interactions (Ogden & Dautenhahn 2000). Research has also begun to look at case studies of how people of all ages interact with and feel about robot companions and co-workers (Turkle, Taggart, Kidd & Daste 2006; Hinds, Roberts & Jones 2004). Further, an understanding of a person’s mental model of a robot’s capabilities can lend insight into his or her communication with it (Lee, Yee-man, Kiesler & Chiu 2005). As the uses of robots become increasingly social in nature, including those used as therapeutic agents, social mediators, and even model social agents (Dautenhahn 2003), it is important that we study how these social robots may be perceived by the humans they are meant to interact with. Dogs are also considered social companions; therefore, a comparison of interactions between live and robotic versions of these companions can lend insight into human-robot interaction.

Robotic pets can provide companionship to those who cannot take care of a live entity. For instance, a robotic dog may be an appropriate companion for an elderly dementia patient. Tamura et al. (2004) found that interacting with an AIBO increased the amount of activity of elderly dementia patients during their occupational therapy. An interaction with a social robot companion also has been shown to improve stress level and provide therapeutic benefit to the elderly (Wada & Shibata 2008). In addition, simple, non-human looking robots that provide fewer social cues than a human appear to be more engaging for autistic children (Robins, Dautenhahn & Dubowski 2006). Health benefits have even been linked to non-anthropomorphic robots in the general population. A preliminary study by Bethel, Salomon, and Murphy (2009) found that participants who interacted with an emotive robot that did not resemble a human or animal were calmer than those who interacted with the same robot in a non-emotive mode. Proper interactions with robots can provide a calming and positive environment for humans in a number of different domains.

The behaviors that children display when interacting with live and robotic animals have recently been examined. One such study observed the spontaneous interaction behavior when children and adults interacted with either an AIBO or a real dog. There was no difference in amount of time before first tactile interaction with the entity or length of time spent touching the real dog versus the AIBO (Kerepesi, Kubinyi, Jonsson, Magnusson & Miklosi 2006). However, Melson et al. (2009) found that children spent more time touching a real dog as compared to an AIBO. In addition, another study focusing on children’s beliefs about robots revealed that they do indeed make a distinction between live and robotic entities (Bernstein & Crowley 2008). After interacting with eight separate entities, two of which were robotic, the children made unique classifications for the humanoid robot and non-humanoid rover robot along the biological and intelligence characteristic spectrum. This was also true for the psychological characteristics attributed to the robotic entities when compared with people, a cat, a computer, and a doll. In this instance, the robotic entities were considered to have more psychological characteristics attributed to them than the computer or doll, but far fewer than the cat or people. Children in this study did make unique distinctions between live, robotic, and other entity types on behavioral and psychological markers.

Another study examined the beliefs that pre-school age children had of AIBO and a stuffed animal dog. When these children interacted with either entity, their beliefs about the “animal” and the ways that they behaved with it were not consistent. In this study, the same proportion of pre-school children attributed mental states, social rapport, and moral standing to both AIBO and the stuffed animal. From the behavioral interaction it is suggested that pre-school children treated AIBO as though it were more capable of making its own decisions than the stuffed dog (Kahn Jr., Friedman, Perez-Granados & Freier 2006). Another study found that when children were put in an interaction situation they were just as likely to talk to AIBO as they were to talk to a real dog. Despite this, the children conceptualized the real dog as having more physical essences, mental states, sociality and moral standing than AIBO (Melson et al. 2009). This suggests that while people may treat an AIBO and a real dog similarly, they may have different beliefs or attributions about them. In a study by Goff, Sims, and Chin (2005), normally developing pre-school children interacted with either a real dog or an AIBO. Children primarily interacted with both the organic and robotic dog through touch, and directed very little spontaneous speech toward the entities.

Sex of the participant may also have an effect on beliefs about robots. In a study by Bartneck and Hu (2008) examining robot abuse, it was observed that males and females used different strategies in destroying robots. On further investigation, it was discovered that females tended to perceive the robots to be more intelligent than did the males. This suggests that the variable of sex may impact the attributions that a person gives robotic entities.

Differences in attributions toward robotic and live dogs have also been shown to exist when one is working with an entity as a teammate (Pepe, Upham Ellis, Sims & Chin 2008). In this 2008 study, participants were told that they were either working with a semi-autonomous AIBO or a live dog in navigating a maze-based task. The entity was briefly shown to the participant, and then taken out of the room. The participant then viewed what they believed to be their partner’s maze progress on a simulated computer screen map. The behavior of both entities was shown via an identical pre-recorded path. There were no differences in the self-reported mood ratings between those participants who interacted with the live dog and those who interacted with the AIBO. While the content of the speech participants used with the dog and AIBO was not significantly different, a significantly higher pitch was used when speaking to the dog. It also was found that participants were more likely to rate the real dog as being more cooperative, responsive, trustworthy, affectionate, obedient, and realistic than the AIBO. Further, the AIBO was rated as being significantly more stupid, unfriendly, sophisticated, and threatening than the real dog. These findings suggest that while outward appearances may indicate that the AIBO and dog are being treated in the same way, different internal beliefs may exist. This research suggests that just because a robot looks like a dog, it is not necessarily treated or thought of as one.

In previous work it has been identified that different attributions and beliefs may be held about live dogs and robotic dogs. At this point it is unclear whether these differences are due to status as a robot, or due to visual appearance. In order to determine this, our study was designed to include another live entity, a cat, and an additional robot. The additional robot included in our study, a Lego Mindstorms robot (which will be referred to as Legobot), was selected since it does not resemble either of the live entities that are being interacted with. These extra comparison conditions can help determine whether the resemblance to a live entity or actually being alive impacts the beliefs one holds about an entity.

In studies such as Pepe et al. (2008), many of the situations in which communication is elicited are part of a procedure that is to be followed, or work that is to be achieved in a partnership. In most cases, participants are instructed on the specific commands and words they must use when engaging in the interaction. The current study was designed to examine the amount and type of communication that is elicited in a less directed situation. Investigation of natural language and behaviors can provide information as to how humans may respond to a novel robot, as well as how we can create better interfaces for linguistic control over unmanned entities.

Sims and Chin (2002) developed a methodology of natural interaction in which participants interacted with a live cat. Their study showed that the frequency with which a person talks to a cat appears to be related to how intelligent he or she believes the cat to be. The current study uses a similar methodology to elicit free form communication between the participants and the entity.

Further, the present study examines the attributions, thoughts and beliefs that participants have about four entities: a Dog, a Cat, an AIBO and a Legobot. By including measures of mood, speech, beliefs and interactive behavior this study will paint a more complete picture of the spontaneous interaction that exists when a human interacts with animals and robots. There are two competing theories that are being examined in this study: (1) Participants will act similarly toward entities that look similar (e.g. AIBO & Dog) or (2) Participants will make distinctions in their beliefs about the behavior of the entities, regardless of their appearance.

1. Method

1.1 Participants

There were 111 undergraduates recruited at a large Southeastern state university (75 females and 36 males). Ages ranged from 18 to 26 (M = 18.76, SD = 1.38). Participants received extra credit in their courses for their participation. The study procedure was administered to participants individually. Participants were assigned upon arrival to one of four conditions: AIBO (N = 28), Legobot (N = 27), Dog (N = 26) or Cat (N = 30). There was approximately the same ratio of males to females that participated in each condition: AIBO (9 males, 19 females), Legobot (10 males, 17 females), Dog (8 males, 18 females), and Cat (9 males, 21 females).

1.2 Entities

The entities were referred to by the experimenters as Dog, Cat, and Robot (in the case of the AIBO and Legobot). See Figures 1 and 2 for pictures of the entities that participants interacted with in the current study.

A black Sony AIBO ERS-7 robotic dog was used. AIBO is a fully autonomous robot that resembles and is the size of a small dog. This particular model of AIBO was selected for use, as it more closely resembles a live dog than previous models. It has four legs, floppy ears, a tail, and a front display to represent eyes. It can “see” with its cameras, “hear” with its microphones, walk, as well as respond to participants both by producing sounds and via touch sensors on its body. AIBO can generally interact as a real animal would. For the purposes of the current experiment, AIBO was kept in the “puppy stage” of development, which shut off its more elaborate and non-dog-like features such as “dancing.”

A Lego Mindstorms robot kit was used to construct the non-dog robot (Legobot). The robot was modeled after a pictured example provided with the purchased kit. Its vision sensors were set on their side to reduce the interpretation of them as eyes. The robot had the ability to produce beeping sounds through a small speaker. The robot included a claw on the front of it, to give the impression that it could grab items. It had wheels and was programmed to move randomly and stop for 5 seconds in response to any sound.
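The Legobot behavior described above — random movement, with a 5-second stop in response to any sound — can be sketched as a simple control loop. This is an illustrative reconstruction, not the robot's actual Mindstorms program; the sound threshold, speed range, and function names are hypothetical:

```python
import random

SOUND_THRESHOLD = 50   # hypothetical sensor units: any reading above this counts as "sound"
PAUSE_SECONDS = 5      # stop duration described in the text

def control_step(sound_level, rng=random.random):
    """One iteration of the Legobot behavior: stop on sound, otherwise move randomly.

    Returns (left_speed, right_speed, pause_seconds).
    """
    if sound_level > SOUND_THRESHOLD:
        # any detected sound: stop both wheels for the pause duration
        return (0, 0, PAUSE_SECONDS)
    # random movement: independent wheel speeds in [-100, 99]
    left = int(rng() * 200) - 100
    right = int(rng() * 200) - 100
    return (left, right, 0)
```

A driver program would call `control_step` repeatedly, sleeping for the returned pause before the next sensor read.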

The live entities included a dog and cat. The dog was approximately 3 years of age and was a female tan Yorkshire Terrier-Poodle mix. The cat was a 5-year-old domestic short-haired female. Both live entities were chosen due to their size (approximately the same size as AIBO) and their sociability with humans. The live entities were monitored at all times by a remote camera system to ensure both their and the participants’ safety. The dog and cat both had access to food, water, and bathroom breaks between participant sessions. Experimenters limited the sessions with the dog and cat to less than 4 hours a day and a professional animal trainer was on hand to ensure that the animals were not stressed during the interaction with the participants.

Figure 1. The AIBO (left) and the Dog (right) that participants interacted with in the current study

Figure 2. The Legobot (left) and the Cat (right) that participants interacted with in the current study

1.3 Procedure and materials

The procedure involved a 5-minute interaction period with the entity. Following the informed consent process, participants were asked to rate their mood by making a mark on a 100 cm line. They were instructed that the far right of the scale was considered a very good mood, and the far left was considered a very bad mood. This was considered Time 1 for Mood Ratings. This method of mood ratings was used as it was quick and not disruptive to the procedure or interactions.

Participants were brought into a small room where they were instructed to stand inside a 6 foot by 6 foot area marked off with white tape on the floor. The room was divided into 2 foot by 2 foot squares by colored floor padding. Participants were then asked to rate their Mood a second time using the same 100 cm scale.

Figure 3 provides a schematic of the testing room. Next, they were told the type of entity with which they would be interacting. Participants were instructed to stay within the white tape during the entire interaction with the entity, and that the entity could come into their area, but they could not go into the entity’s area. They also were instructed not to touch the entity, and not to throw anything toward the entity. After asking the participant if he or she had any questions, the experimenter left and returned with the entity and placed it on the center square of the room, approximately 4 feet from the participant. Figure 4 provides a timeline of the events of the procedure.

Figure 3. Set up for the experiment (schematic labels: participant station, participant line, experimenter station, computer & monitoring, hallway)

Figure 4. Timeline of the experiment (experimental setup and pre-experiment information; informed consent & Time 1 mood rating; instructions & Time 2 mood rating; Minute 1: unstructured interaction & Time 3 mood rating; Minutes 2–3: structured interaction without toys & Time 4 and Time 5 mood ratings; Minutes 4–5: structured interaction with toys & Time 6 and Time 7 mood ratings; post-experiment attribution & capabilities questionnaires; debriefing form & answering questions)

Video and audio recording. Video and audio recordings were made of each participant’s interaction with the entity. Four separate overhead security cameras were used to allow multiple viewing angles of the entire room in which the interaction took place. The participant wore a wireless headset microphone to ensure that what he or she said was captured. The main experimenter who worked with the participant was not present in the room with the participant during the interaction period. Therefore, an additional experimenter monitored a live video feed of the interaction for safety purposes.

Minute 1. After explaining the initial instructions the experimenter announced that he or she had one more thing to do, and would be right back. The participant did not have any specific instructions for this minute. Upon returning, the experimenter took another mood rating (Time 3).

Minutes 2 & 3. The participants were told that their task was to find out what the entity was capable of doing. After providing the participants an opportunity to ask any questions that they had about the task, the experimenter then left the room. The experimenter wore a digital watch with a timer that was set as they closed the door to ensure they would know when each minute was complete. After each minute of this interaction the experimenter opened the door to the room and handed the participant the clipboard to rate his or her mood (Times 4 & 5).

Minutes 4 & 5. The experimenter provided a box, which was put down on a taped square to ensure it was in the same location for all participants. The box included five objects: a rope bone, rattle, book, string toy, and stuffed toy. None of the items were of the solid pink color to which AIBO’s sensors are sensitive, so as not to give AIBO or any entity any object that it would be particularly responsive to over others. The participants were told that they could now use the objects in the box to assist in determining the entity’s capabilities. The experimenter returned after each minute of this interaction to take a mood rating (Times 6 & 7). Following the interaction period, the participant was administered a number of different questionnaires on a computer in a separate room.

Attribution and capabilities questionnaires. The attribution and capabilities questionnaires were used to rate the participants’ agreement with statements of “The entity I interacted with was ________”, and “I believe the entity was capable of _______” on a 7-point Likert scale (1 being high, 7 being low). The list of attributions (e.g. friendly, threatening) and capabilities (e.g. capable of recognizing human faces, thought) were selected by adapting a list of attributions that participants were asked to rate entities on in a previous experiment (Pepe et al. 2008). An additional three questions were included in which participants rated the entity on a 5-point scale ranging between the terms: Uncooperative-Cooperative, Easy-Difficult, and Responsive-Unresponsive.

Debriefing. After completing the surveys, the participant was given a debriefing form. The experimenter explained what the goals of the study were, and allowed the participant to ask any questions he or she had.

1.4 Coding

To ensure accuracy, two separate audio transcriptions of the interaction were done independently, and coded in their entirety by two different coders. The transcriptions were coded for both amount spoken and content of what was spoken by the participants. Inter-rater reliability was calculated by correlating the responses of the two coders and was found to be extremely high (.97 ≤ r ≤ .99). The two ratings were averaged together and the composite average rating was used for analyses. Video recordings were independently coded by two coders to assess the number of objects used by participants. Inter-rater reliability was high (.81 ≤ r ≤ .92), so the ratings were averaged and used for analysis.
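The reliability procedure described above — correlate the two coders' counts, then carry the averaged composite forward into the analyses — can be sketched as follows; the coder counts here are invented for illustration, not the study's data:

```python
import numpy as np

def reliability_and_composite(coder_a, coder_b):
    """Correlate two coders' counts (inter-rater reliability) and
    return the averaged composite rating used for analysis."""
    a = np.asarray(coder_a, dtype=float)
    b = np.asarray(coder_b, dtype=float)
    r = np.corrcoef(a, b)[0, 1]     # Pearson correlation between the coders
    composite = (a + b) / 2.0       # average of the two ratings
    return r, composite

# Hypothetical word counts from two coders for six participants
r, composite = reliability_and_composite([20, 11, 9, 3, 15, 7],
                                         [21, 10, 9, 2, 16, 7])
```

With near-identical counts, r comes out well above the .97 floor reported for the transcription coding, and the composite is simply the elementwise mean.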

2. Results

Due to the large number of results, tables that include non-significant tests have been provided, but only significant results will be discussed in this section. The tables include all means, standard deviations, ANOVA tests and partial eta squared measures for each analysis mentioned.

2.1 Speech

A 4 (Entity Type) × 2 (Participant Sex) × 5 (Minute of Interaction) Mixed ANOVA was run in order to examine each of the different types of speech attributions. In order to adjust for multiple comparisons, a Bonferroni adjustment was used for the within-subjects variables, and a Bonferroni post-hoc test was used for between-subjects variables.

Entity. There was a main effect for the mean number of words spoken to the entities (Word Count), F(3,103) = 5.712, p = .001, ηp² = .143. The mean number of words spoken to the Dog was significantly higher than those spoken to the Legobot, p < .001. The other entities were not significantly different from each other. See Table 1 for data regarding each of the 7 types of speech attributions for each entity.

Average word length was calculated by dividing the Number of Characters by the Number of Words Spoken. The average word length for the Legobot was found to be significantly less than the average word length for all other entities (all p < .001).
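The average word length measure is a simple ratio of characters to words; a minimal sketch (the example utterance and the convention of counting only characters within the spoken words, excluding whitespace, are assumptions of this sketch):

```python
def average_word_length(utterance):
    """Characters divided by number of words spoken, per the definition in the text.

    Assumes 'characters' counts only characters inside the spoken words
    (whitespace excluded) — a detail the paper does not specify.
    """
    words = utterance.split()
    if not words:
        return 0.0   # nothing spoken: define the average as zero
    return sum(len(w) for w in words) / len(words)
```

Note that a participant who said nothing contributes 0, which is consistent with the very low Legobot averages in Table 1.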

The Dog was asked significantly more questions and given significantly more commands than the AIBO (p = .021 for both), and Legobot (p < .001 for both). The Dog was given significantly more feedback than the AIBO, Legobot and Cat (all p < .001). The Cat was called by significantly more diminutive names than AIBO (p < .001), Legobot (p < .001) and Dog (p = .046).

Table 1. Speech attributions for each entity (means, standard deviations and ANOVA tests)

                  Dog             Cat             AIBO            Legobot        Significance                  Partial eta squared
Word Count        20.06 (14.101)  11.79 (10.799)  9.84 (18.322)   2.84 (7.541)   F(3,103) = 5.712, p = .001    ηp² = .143
Avg. Word Length  2.86 (.959)     2.44 (1.369)    2.14 (1.134)    .76 (1.123)    F(3,103) = 14.052, p < .001   ηp² = .290
Questions         1.16 (1.079)    .57 (.699)      .49 (1.018)     .05 (.197)     F(3,103) = 6.318, p = .001    ηp² = .155
Commands          5.24 (4.103)    2.75 (3.853)    2.36 (3.007)    1.03 (3.078)   F(3,103) = 4.856, p = .003    ηp² = .124
Feedback          .52 (.657)      .01 (.037)      .05 (.160)      .05 (.197)     F(3,103) = 8.735, p < .001    ηp² = .203
Entity Name       .07 (.178)      .09 (.261)      .13 (.459)      .01 (.038)     F(3,103) = 1.973, p = .123    ηp² = .054
Diminutive Name   .33 (.416)      .74 (.951)      .13 (.289)      .00 (.000)     F(3,103) = 12.347, p < .001   ηp² = .186

Sex. There was a main effect of Sex in regard to the number of times the entity name was used, F(1,103) = 4.614, p = .034, ηp² = .043, such that males used the entity name (M = .155, SD = .456) significantly more than females did (M = .037, SD = .121).

Time. For all the within-subjects ANOVAs of Time, the assumption of sphericity was violated according to Mauchly’s Test of Sphericity. As a result, the degrees of freedom of each test were corrected using Greenhouse-Geisser estimates of sphericity. See Table 2 for the tests of sphericity and Greenhouse-Geisser epsilon statistics.
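The Greenhouse-Geisser correction rescales both within-subjects degrees of freedom by epsilon. A sketch of the arithmetic, using the Word Count values (k = 5 minutes); the within-subjects error df of 103 is inferred from the reported between-subjects F tests, not stated explicitly in the text:

```python
def gg_corrected_dfs(epsilon, k_levels, df_error):
    """Scale both within-subjects degrees of freedom by the
    Greenhouse-Geisser epsilon: df1 = eps*(k-1), df2 = eps*(k-1)*df_error."""
    df1 = epsilon * (k_levels - 1)
    df2 = df1 * df_error
    return df1, df2

# Word Count: epsilon ≈ .85 (Table 2), k = 5 minutes, error df = 103
df1, df2 = gg_corrected_dfs(0.85, 5, 103)
```

These inputs give corrected dfs near 3.4 and 350.2, close to the reported F(3.415, 351.697); the small gap reflects epsilon being rounded to two decimals in Table 2.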

Table 2. Mauchly’s test of sphericity for speech attributes

                  Significance test          Greenhouse-Geisser epsilon
Word Count        X²(9) = 36.16, p < .001    ε = .85
Avg. Word Length  X²(9) = 16.86, p = .051    ε = .93
Questions         X²(9) = 39.05, p < .001    ε = .82
Commands          X²(9) = 44.77, p < .001    ε = .80
Feedback          X²(9) = 44.77, p < .001    ε = .80
Entity Name       X²(9) = 166.98, p < .001   ε = .59
Diminutive Name   X²(9) = 99.26, p < .001    ε = .70

There was a main effect of Time for Word Count, F(3.415, 351.697) = 21.718, p < .001, ηp² = .174. Significantly less was spoken in Minute 1 than in any of the other minutes (all p < .001). In addition, significantly more was spoken in Minute 2 than in Minute 3 (p = .004) and Minute 4 (p = .037). See Table 3 for data regarding each of the 7 types of speech attributions for each minute.

Table 3. Speech attributions by minute (means, standard deviations)

                  Minute 1      Minute 2       Minute 3       Minute 4       Minute 5
Word Count        2.98 (6.816)  16.14 (20.743) 11.55 (16.063) 12.08 (18.837) 12.55 (19.841)
Avg. Word Length  1.21 (1.715)  2.61 (1.809)   2.41 (1.892)   1.98 (1.849)   2.05 (1.807)
Questions         .15 (.471)    .77 (1.380)    .57 (1.305)    .53 (1.086)    .77 (1.605)
Commands          .33 (1.216)   4.87 (6.266)   3.19 (4.393)   3.14 (5.661)   2.55 (4.771)
Feedback          .02 (.134)    .18 (.591)     .08 (.360)     .23 (.809)     .23 (.842)
Entity Name       .05 (.353)    .14 (.495)     .09 (.394)     .03 (.211)     .07 (.349)
Diminutive Name   .16 (.478)    .63 (1.211)    .32 (.809)     .23 (.842)     .21 (.620)

A Time × Entity interaction was found for Word Count, F(10.244, 351.697) = 3.267, p < .001, ηp² = .087. A series of One Way ANOVAs was run to examine the interaction; these ANOVAs and descriptive statistics are reported in Table 4. In Minute 1, while the entities were determined to be different by the One Way ANOVA, Bonferroni post-hocs did not indicate any significant differences between entities. For Minute 2, the Legobot was spoken to significantly less than the Dog (p < .001) and the Cat (p = .020). In Minute 3, the Dog was spoken to significantly more than the AIBO (p = .028) and Legobot (p < .001). In Minutes 4 and 5, the Dog was spoken to significantly more than the Legobot (p = .003; p = .012).
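Follow-up tests of this form (a One Way ANOVA comparing the four entity groups within a single minute) can be sketched with scipy; the per-participant word counts below are synthetic stand-ins, not the study's data:

```python
from scipy.stats import f_oneway

# Synthetic word counts for one minute, eight participants per entity
# (illustrative only — not the study's data)
dog     = [30, 25, 41, 18, 35, 28, 22, 33]
cat     = [17, 12, 20, 9, 15, 22, 11, 14]
aibo    = [14, 8, 25, 5, 10, 18, 7, 12]
legobot = [2, 0, 5, 1, 3, 0, 4, 2]

# One Way ANOVA across the four entity groups
f_stat, p_value = f_oneway(dog, cat, aibo, legobot)
```

A significant omnibus F would then be followed by Bonferroni-corrected pairwise comparisons, as in the analyses reported here.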

Table 4. Amount of words spoken to each entity by minute (means, standard deviations and ANOVA tests)

          Dog             Cat             AIBO            Legobot        Significance                  Partial eta squared
Minute 1  5.08 (5.993)    4.30 (6.215)    2.11 (9.829)    .41 (2.117)    F(3,107) = 2.757, p = .046    ηp² = .072
Minute 2  30.54 (20.824)  17.63 (15.836)  14.00 (25.454)  2.81 (7.185)   F(3,107) = 10.019, p < .001   ηp² = .219
Minute 3  21.65 (18.049)  12.53 (13.726)  10.04 (18.238)  2.30 (5.312)   F(3,107) = 7.738, p < .001    ηp² = .178
Minute 4  22.46 (21.658)  9.87 (12.743)   11.86 (23.146)  4.78 (11.995)  F(3,107) = 4.520, p = .005    ηp² = .112
Minute 5  20.58 (20.068)  14.60 (20.272)  11.21 (19.978)  3.93 (16.031)  F(3,107) = 3.490, p = .018    ηp² = .089

There was a main effect of Time for Average Word Length, F(3.717, 382.899) = 17.893, p < .001, ηp² = .148. See Table 3 for means and standard deviations regarding average word length by minute, and the other speech attributes. The average word length used in Minute 1 was significantly shorter than in Minutes 2 (p < .001), 3 (p < .001), 4 (p = .006) and 5 (p < .001). In addition, average word length in Minute 2 was significantly longer than in Minutes 4 (p = .004) and 5 (p = .003).

For number of questions posed to the entity, there was a main effect of Time, F(3.276, 337.460) = 7.286, p < .001, ηp² = .066. Significantly fewer questions were asked in Minute 1 than in Minutes 2 (p < .001), 3 (p = .001), 4 (p = .026), and 5 (p < .001). A Time × Entity interaction was found for questions, F(9.829, 337.460) = 2.187, p = .019, ηp² = .060. A series of One Way ANOVAs was run to examine the interaction; these ANOVAs and descriptive statistics are reported in Table 5. In Minutes 2 and 3, the Dog was asked significantly more questions than the AIBO (p = .026; p = .007), Legobot (p < .001; p < .001), and Cat (p = .027; p = .012). In Minutes 4 and 5, the Dog was asked significantly more questions than the Legobot (p = .011; p = .012).

Table 5. Amount of questions each entity was asked by minute (means, standard deviations and ANOVA tests)

          Dog           Cat           AIBO          Legobot      Significance
Minute 1   .31 (.679)    .23 (.568)    .04 (.189)   .04 (.192)   F(3,107) = 2.443, p = .068, ηp² = .064
Minute 2  1.69 (1.850)   .70 (1.264)   .68 (1.188)  .07 (.385)   F(3,107) = 7.300, p < .001, ηp² = .170
Minute 3  1.46 (1.985)   .43 (.971)    .36 (1.026)  .07 (.385)   F(3,107) = 6.572, p < .001, ηp² = .156
Minute 4   .96 (1.113)   .57 (1.006)   .57 (1.451)  .04 (.192)   F(3,107) = 3.469, p = .019, ηp² = .089
Minute 5  1.38 (1.651)   .90 (1.517)   .79 (2.132)  .04 (.192)   F(3,107) = 3.427, p = .020, ηp² = .088

For number of commands given to the entity, there was a main effect of Time, F(3.206, 330.207) = 26.094, p < .001, ηp² = .202. Significantly fewer commands were given in Minute 1 than in any of the other minutes (all p < .001). In addition, more commands were given in Minute 2 than in Minutes 3 (p = .001), 4 (p < .001) and 5 (p < .001). A Time × Entity interaction was found for number of commands, F(9.618, 330.207) = 4.679, p < .001, ηp² = .120. A series of One Way ANOVAs was run to examine the interaction; these ANOVAs and descriptive statistics are reported in Table 6. In Minute 1, the Cat was given significantly more commands than the Legobot (p = .041). In Minute 3, the Dog was given significantly more commands than the Legobot (p = .004). In Minutes 2 and 4, the Dog was given significantly more commands than the AIBO (p = .003; p = .014), Legobot (p < .001; p = .003) and Cat (p = .014; p = .017).

For the amount of feedback given to the entity, there was a main effect of Time, F(3.144, 323.841) = 2.895, p = .033, ηp² = .039. Significantly less feedback was given in Minute 1 than in Minute 2 (p = .028).

For the number of times participants used the entity name as spoken by the experimenter (i.e. "Dog", "Robot", "Cat"), there was a main effect of Time, F(2.370, 244.117) = 4.093, p = .013, ηp² = .038. The entity name was said significantly less often in Minute 4 than in Minute 2 (p = .046).

For the number of times the entity was called by a diminutive name, there was a main effect of Time, F(2.784, 286.784) = 10.582, p < .001, ηp² = .093. Significantly more diminutive names were used in Minute 2 than in Minute 1 (p < .001), Minute 3 (p = .007), Minute 4 (p = .003) and Minute 5 (p < .001). There was a Time × Entity interaction, F(12,412) = 4.170, p < .001, ηp² = .108. A series of One Way ANOVAs was run to examine the interaction; these ANOVAs and descriptive statistics are reported in Table 7. There was a significant difference for Minutes 1, 2, 3, and 5, such that the Cat was called by more diminutive names than the AIBO (p = .006; p < .001; p = .002; p = .010) and Legobot (p = .007; p < .001; p = .001; p = .005). While the ANOVA indicated that the number of diminutive names used with the entities differed in Minute 4, Bonferroni post-hocs revealed no significant differences.

Table 6. Number of commands given to each entity by minute (means, standard deviations and ANOVA tests)

          Dog           Cat           AIBO          Legobot       Significance
Minute 1   .31 (.679)    .87 (2.113)   .11 (.567)    .00 (.000)   F(3,107) = 3.094, p = .030, ηp² = .080
Minute 2  9.54 (7.431)  4.83 (5.389)  4.00 (5.347)  1.33 (3.913)  F(3,107) = 9.731, p < .001, ηp² = .214
Minute 3  5.31 (4.905)  3.43 (5.124)  2.86 (3.135)  1.22 (3.215)  F(3,107) = 4.250, p = .007, ηp² = .106
Minute 4  6.77 (7.421)  2.40 (5.090)  2.21 (3.552)  1.44 (4.790)  F(3,107) = 5.358, p = .002, ηp² = .131
Minute 5  4.27 (4.960)  2.23 (3.794)  2.64 (5.743)  1.15 (4.148)  F(3,107) = 2.004, p = .118, ηp² = .053

Table 7. Number of diminutive names used with each entity by minute (means, standard deviations and ANOVA tests)

          Dog          Cat           AIBO          Legobot     Significance
Minute 1  .23 (.430)    .40 (.770)   .00 (.000)   .00 (.000)   F(3,107) = 5.339, p = .002, ηp² = .130
Minute 2  .85 (1.279)  1.47 (1.613)  .14 (.448)   .00 (.000)   F(3,107) = 11.591, p < .001, ηp² = .245
Minute 3  .35 (.689)    .80 (1.270)  .07 (.262)   .00 (.000)   F(3,107) = 6.729, p < .001, ηp² = .159
Minute 4  .00 (.000)    .50 (1.137)  .39 (1.133)  .00 (.000)   F(3,107) = 2.832, p = .042, ηp² = .074
Minute 5  .23 (.430)    .53 (1.042)  .04 (.189)   .00 (.000)   F(3,107) = 4.995, p = .003, ηp² = .123


Toy use

Participants were given 5 toys for use in Minutes 4 and 5 of the interaction. A 4 (Entity) × 2 (Time) × 2 (Sex) Mixed ANOVA was run to examine the number of toys that were used. A Bonferroni correction was used to account for multiple comparisons in the within-subjects variables, and Bonferroni post-hocs were used to examine significant differences in the between-subjects variables.
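The Bonferroni correction applied here simply scales each p-value in a family of comparisons by the family size, capping the result at 1.0. A minimal sketch (the function name and p-values are hypothetical, not the study's):

```python
def bonferroni_adjust(p_values):
    """Bonferroni correction: multiply each p-value in a family of
    comparisons by the number of comparisons, capping at 1.0."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical family of three pairwise comparisons:
adjusted = bonferroni_adjust([0.01, 0.02, 0.30])
print([round(p, 2) for p in adjusted])  # [0.03, 0.06, 0.9]
```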

Entity. Significantly fewer toys were used with the Legobot (M = 1.45, SD = .832) than the AIBO (M = 2.10, SD = .762, p = .013), Dog (M = 2.57, SD = .770, p < .001) and Cat (M = 2.70, SD = .692, p < .001); F(3,103) = 14.596, p < .001, ηp² = .298. Participants also used significantly more toys with the Cat than the AIBO, p < .025.

Time. There was a main effect of Time for the number of toys used, F(1,103) = 5.494, p = .021, ηp² = .051. Participants used significantly more toys in Minute 4 (M = 2.34, SD = 1.081) than in Minute 5 (M = 2.10, SD = 1.158).

Attributions given to the entity

One Way ANOVAs were used to determine differences in attribution ratings given to each of the entities. After a significant ANOVA, Bonferroni post-hocs were used to determine which entities were rated significantly differently from each other.
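The One Way ANOVA F statistic underlying these comparisons can be computed from raw scores. A minimal sketch with hypothetical data (not the study's ratings):

```python
def one_way_anova_f(groups):
    """Compute the one-way ANOVA F statistic from raw scores.
    groups: list of lists of observations, one list per condition."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-groups sum of squares: weighted squared deviations of group means
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-groups sum of squares: squared deviations around each group mean
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    df_between, df_within = k - 1, n_total - k
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical ratings from three small groups:
print(round(one_way_anova_f([[1, 2, 3], [2, 3, 4], [4, 5, 6]]), 6))  # 7.0
```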

Entity. The Dog was rated as significantly more friendly than the Legobot (p = .027). Further, the Cat was rated as significantly less friendly and animate than the Dog (p < .001; p < .001), AIBO (p < .001; p < .001) and Legobot (p = .012; p = .005). The Dog was rated as significantly more trustworthy and approachable than the AIBO (p = .004; p = .04), Legobot (p = .003; p = .001) and Cat (p = .003; p < .001). In addition, the Cat was rated as significantly less approachable than the AIBO (p = .002). The Legobot was rated as significantly less intelligent than the AIBO (p = .043). The Dog was rated as significantly more reliable than the Legobot (p = .019). The Cat was rated as significantly less indecisive than the Legobot (p = .015). The Legobot was rated as significantly more aggressive than the AIBO (p = .004) and Dog (p = .005). While main effects were found for ratings of how attentive and how smart the entity was, Bonferroni post-hoc analyses did not find the entities to be significantly different. See Table 8 for ANOVAs and descriptive statistics regarding each of the rated attributions for the entities.

Sex. There was an Entity × Sex interaction in regard to ratings of entity intelligence, F(3,103) = 6.607, p < .001, ηp² = .161. Females (M = 3.59, SD = 1.228) rated the Legobot as significantly more intelligent than did the males (M = 2.00, SD = .943), F(1,25) = 12.363, p = .002, ηp² = .336. See Table 9 for means and standard deviations for this Entity × Sex interaction.


There was a main effect of Sex in regard to ratings of entity aggressiveness, F(1,103) = 4.937, p = .028, ηp² = .046. Overall, males rated the entities as significantly more aggressive (M = 2.17, SD = 1.298) than females did (M = 1.61, SD = 1.184).

Beliefs about the entity's behavior

Uncooperative-Cooperative. A One Way ANOVA revealed a main effect of Entity for how uncooperative the participant thought the entity was, F(3,103) = 7.548, p < .001, ηp² = .180. Bonferroni post-hoc analyses revealed that the Cat was rated as significantly less cooperative (M = 1.70, SD = .915) than the Dog (M = 2.69, SD = .970, p = .002), Legobot (M = 2.52, SD = .935, p = .015) and the AIBO (M = 2.79, SD = 1.197, p < .001).

Table 8. Attribution ratings given to each entity (means, standard deviations and  ANOVA tests)

              Dog           Cat           AIBO          Legobot       Significance
Friendly      5.23 (1.505)  2.63 (1.245)  4.86 (1.604)  3.96 (1.911)  F(3,103) = 12.303, p < .001, ηp² = .264
Attentive     3.96 (1.732)  3.07 (1.596)  4.96 (1.319)  3.37 (1.801)  F(3,103) = 2.954, p = .036, ηp² = .079
Trustworthy   4.54 (1.363)  3.13 (1.306)  3.14 (1.557)  3.11 (1.761)  F(3,103) = 5.682, p = .001, ηp² = .142
Approachable  5.73 (1.638)  2.90 (1.626)  4.50 (1.528)  4.00 (1.710)  F(3,103) = 10.158, p < .001, ηp² = .228
Threatening   1.50 (.949)   2.03 (1.299)  1.79 (1.287)  2.00 (1.209)  F(3,103) = .826, p = .482, ηp² = .024
Intelligent   3.88 (1.177)  3.33 (1.295)  3.89 (1.343)  3.00 (1.359)  F(3,103) = 4.523, p = .005, ηp² = .116
Reliable      3.73 (1.116)  3.07 (1.258)  3.04 (1.105)  2.78 (1.086)  F(3,103) = 3.691, p = .014, ηp² = .097
Indecisive    3.73 (1.282)  3.47 (1.697)  4.25 (1.430)  4.74 (1.655)  F(3,103) = 2.991, p = .034, ηp² = .080
Smart         4.23 (1.366)  3.50 (1.480)  4.14 (1.380)  3.22 (1.577)  F(3,103) = 3.624, p = .016, ηp² = .095
Aggressive    1.50 (1.030)  1.60 (.932)   1.50 (.882)   2.59 (1.693)  F(3,103) = 4.632, p = .006, ηp² = .113
Animate       4.38 (1.722)  2.57 (1.455)  4.57 (1.425)  3.89 (1.188)  F(3,103) = 10.228, p < .001, ηp² = .230


Easy-Difficult. A One Way ANOVA revealed a main effect of Entity for how easy-difficult the participant thought the entity was, F(3,103) = 5.085, p = .003, ηp² = .129. Bonferroni post-hoc analyses revealed that the Cat (M = 3.87, SD = 1.408) was rated as significantly less easy than the Dog (M = 2.77, SD = .908), p = .005, and Legobot (M = 2.56, SD = 1.311), p < .001. Further, the AIBO (M = 3.54, SD = 1.105) was rated as significantly less easy than the Legobot, p = .017.

Responsive-Unresponsive. A One Way ANOVA revealed a main effect of Entity for how responsive-unresponsive the participant thought the entity was, F(3,103) = 4.431, p = .006, ηp² = .114. Bonferroni post-hoc analyses revealed that the AIBO (M = 2.93, SD = 1.274) was rated as significantly more responsive than the Cat (M = 3.87, SD = 1.196), p = .021. Ratings of the Legobot (M = 3.78, SD = 1.086) and the Dog (M = 3.27, SD = 1.185) were not significantly different from those of the other entities.

Beliefs about the capabilities of the entity

One Way ANOVAs were used to determine differences in ratings given to each of the entities in regard to their capabilities. After a significant ANOVA, Bonferroni post-hocs were used to determine which entities were rated significantly differently from each other.

Entity. The Dog and Cat were rated significantly higher on the ability to recognize human faces than the AIBO (p = .003; p = .018) and Legobot (both p < .001). Further, the Legobot was rated significantly lower on the ability to recognize human faces than the AIBO (p = .013). The Dog (p = .002; p < .001; p < .001), Cat (p = .007; p < .001; p < .001), and AIBO (p = .030; p = .001; p < .001) were rated as significantly more visually aware of their surroundings, more capable of interpreting human emotion, and more capable of expressing their own emotions than the Legobot. In addition, the Dog was rated as significantly more capable of interpreting human emotion than the Cat (p = .036) and AIBO (p = .034). Further, the Dog was rated as significantly more capable of expressing its own emotion than the Cat (p = .032). The Dog was rated significantly higher on the capability to carry out its own intentions than the Cat (p = .001), AIBO (p = .031), and Legobot (p < .001). Further, the Legobot was rated as significantly less capable of carrying out its own intentions than the AIBO (p = .003). The Dog and Cat were rated significantly higher on being capable of thought than the AIBO (p < .001; p = .023) and Legobot (both p < .001). The Dog was rated as being significantly more capable of understanding language than the Legobot (p = .007) and Cat (p = .017). The Legobot was rated as significantly less capable of hearing sound than the Dog (p < .001), Cat (p = .001) and AIBO (p < .001). The Dog was rated as significantly more capable of sensing touch than the AIBO (p < .001), Legobot (p < .001) and Cat (p = .018). The Legobot was rated as significantly less capable of navigating its own environment than the AIBO and Dog (both p < .001). The Cat was rated as significantly less capable of navigating its own environment than the Dog (p = .001). While a main effect was found for the belief that the entity understood the participant, Bonferroni post-hoc analyses revealed no significant differences between entities. See Table 10 for means, standard deviations and ANOVA tests regarding each of the rated capabilities for each entity.

Table 9. Means and standard deviations of ability and capability ratings that had an Entity × Sex interaction

             Males         Females
Intelligence
  Dog        4.38 (1.188)  3.67 (1.138)
  Cat        4.00 (1.000)  3.05 (1.322)
  AIBO       3.22 (1.481)  4.21 (1.182)
  Legobot    2.00 (.943)   3.59 (1.228)
Capable of sensing touch
  Dog        5.75 (1.282)  5.83 (1.383)
  Cat        6.00 (1.118)  3.81 (1.887)
  AIBO       3.89 (1.833)  3.47 (1.429)
  Legobot    3.10 (1.287)  3.88 (2.147)
Successfully navigate environment
  Dog        5.50 (.926)   5.72 (1.074)
  Cat        5.22 (1.394)  3.48 (1.914)
  AIBO       5.11 (1.616)  4.95 (1.268)
  Legobot    1.70 (1.252)  3.88 (2.058)
Belief that the entity understood the participant
  Dog        3.63 (1.188)  2.94 (1.862)
  Cat        3.00 (1.581)  2.29 (1.146)
  AIBO       3.22 (1.563)  3.16 (1.573)
  Legobot    1.22 (.667)   2.94 (1.784)

Sex. For each interaction found, One Way ANOVAs were run to examine the interaction, and Bonferroni post-hocs were used to determine significant differences.

There was an Entity × Sex interaction for belief that the entity can express emotion, F(3,103) = 3.408, p = .02, ηp² = .090. Males gave significantly different ratings as a function of entity, F(3,32) = 28.101, p < .001, ηp² = .725. Males rated the Dog (M = 6.00, SD = 1.069) as significantly more able to express emotion than the Cat (M = 4.11, SD = .928, p < .001) and Legobot (M = 1.50, SD = .972, p < .001). For males, the Legobot was rated significantly lower than all other entities on this ability (all p < .001), while their ratings of the AIBO (M = 4.67, SD = 1.323) and Cat were not significantly different from each other.

Females showed a different pattern, F(3,71) = 6.243, p = .001, ηp² = .209. Females rated the Legobot (M = 2.65, SD = 1.656) as significantly less able to express emotion than the Dog (M = 4.56, SD = 1.504, p = .002) and AIBO (M = 4.58, SD = 1.121, p = .002). Females did not rate the Cat (M = 3.86, SD = 1.711) as significantly different from any of the other entities in regard to its ability to express emotion.


Table 10. Beliefs about the capabilities of each entity (means, standard deviations and ANOVA tests)

                                        Dog           Cat           AIBO          Legobot       Significance
Recognize Faces                         4.77 (1.423)  4.53 (1.613)  3.43 (1.399)  2.26 (1.095)  F(3,103) = 20.759, p < .001, ηp² = .377
Understand Gestures                     3.54 (1.529)  3.33 (1.516)  3.61 (1.685)  2.89 (1.805)  F(3,103) = 1.662, p = .180, ηp² = .046
Visually Aware of Surroundings          4.96 (1.562)  4.73 (1.660)  4.54 (1.666)  3.19 (2.131)  F(3,103) = 6.975, p < .001, ηp² = .169
Interpret Emotion                       4.19 (1.470)  3.20 (1.375)  3.18 (1.467)  1.74 (1.059)  F(3,103) = 15.372, p < .001, ηp² = .309
Express Emotion                         5.00 (1.528)  3.93 (1.507)  4.61 (1.166)  2.22 (1.528)  F(3,103) = 23.722, p < .001, ηp² = .409
Carry out Intentions                    5.12 (1.366)  3.57 (1.406)  4.00 (1.656)  2.59 (1.279)  F(3,103) = 13.449, p < .001, ηp² = .281
Thought                                 4.88 (1.683)  4.13 (1.613)  2.93 (1.274)  2.30 (1.683)  F(3,103) = 15.266, p < .001, ηp² = .308
Responding to Verbal Commands           3.73 (1.687)  2.87 (1.456)  3.93 (1.720)  3.11 (1.423)  F(3,103) = 2.295, p = .082, ηp² = .063
Understanding Language                  4.19 (1.625)  2.97 (1.402)  3.61 (1.618)  2.81 (1.360)  F(3,103) = 4.093, p = .009, ηp² = .107
Hearing Sound                           5.77 (1.796)  5.47 (1.634)  5.57 (1.597)  3.70 (1.750)  F(3,103) = 9.868, p < .001, ηp² = .223
Sensing Touch                           5.81 (1.327)  4.47 (1.961)  3.61 (1.548)  3.59 (1.886)  F(3,103) = 10.224, p < .001, ηp² = .229
Navigating its Environment              5.65 (1.018)  4.00 (1.930)  5.00 (1.361)  3.07 (2.074)  F(3,103) = 14.985, p < .001, ηp² = .304
Belief that "The entity understood me"  3.15 (1.690)  2.50 (1.306)  3.18 (1.541)  2.35 (1.696)  F(3,103) = 3.155, p = .028, ηp² = .085


There was an Entity × Sex interaction for belief that the entity had the capability to sense touch, F(3,103) = 3.740, p = .013, ηp² = .098. Females rated the Cat significantly lower (M = 3.81, SD = 1.887) than the males did (M = 6.00, SD = 1.118) on the ability to sense touch, F(1,28) = 10.419, p = .003, ηp² = .271.

There was an interaction of Entity × Sex for belief that the entity had the ability to successfully navigate the environment, F(3,103) = 6.873, p < .001, ηp² = .167. Males rated the Cat significantly higher (M = 5.22, SD = 1.394) than did the females (M = 3.48, SD = 1.914), F(1,28) = 6.056, p = .020. Females rated the Legobot (M = 3.88, SD = 2.058) significantly higher than males did (M = 1.70, SD = 1.252), F(1,25) = 9.158, p = .006, ηp² = .268. There was also an interaction of Entity × Sex for belief that the entity understood the participant, F(3,103) = 3.362, p = .022, ηp² = .359. Females rated the Legobot (M = 2.94, SD = 1.784) as significantly more able to understand the participant than did males (M = 1.22, SD = .667), F(1,24) = 7.658, p = .011, ηp² = .242. See Table 9 for means and standard deviations regarding the ratings the two sexes gave on these capabilities.

Mood ratings

Time. A 4 (Entity) × 2 (Sex) × 7 (Time of Mood Rating) Mixed ANOVA was run to examine differences in ratings of mood. As the assumption of sphericity was violated, χ²(20) = 209.788, p < .001, the Greenhouse-Geisser test statistic (ε = .51) is reported for Time of Mood Rating. This analysis yielded a main effect of time of mood rating, F(3.061, 299.990) = 9.342, p < .001, ηp² = .087. Time 1 (M = 73.35, SD = 16.680) was rated significantly higher than Time 6 (M = 65.69, SD = 18.540, p = .014) and Time 7 (M = 63.26, SD = 19.409, p < .001). In addition, Time 7 was rated significantly lower than Times 2, 3, and 4 (M = 71.32, SD = 16.799, p = .001; M = 71.97, SD = 16.431, p < .001; M = 70.12, SD = 16.864, p = .003). Time 3 was significantly higher than Time 6 (p = .021). Time 5 (M = 67.94, SD = 17.604) was not significantly different from any of the other times. In general, mood ratings decreased as time progressed. There were no significant differences in mood ratings in regard to condition or sex.
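The fractional degrees of freedom above come from the Greenhouse-Geisser correction, which multiplies the uncorrected degrees of freedom by the sphericity estimate ε. A minimal sketch (the function name is ours), using the reported ε of .51 for the seven mood ratings:

```python
def gg_effect_df(epsilon, k_levels):
    """Greenhouse-Geisser corrected numerator degrees of freedom for a
    within-subjects factor with k_levels levels: epsilon * (k_levels - 1)."""
    return epsilon * (k_levels - 1)

# Seven mood-rating time points with the reported epsilon of .51:
print(round(gg_effect_df(0.51, 7), 2))  # 3.06 (the reported 3.061 reflects the unrounded epsilon)
```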

Amount of time entity spent in the participant's box

A 5 (Time) × 4 (Entity) × 2 (Sex) Mixed ANOVA was run to examine differences in how many seconds the entity spent near the participant per minute of interaction. There was no significant difference in the amount of time the entity spent in the participant's box in regard to the sex of the participant, F(1,103) = 0.415, p = .521, ηp² = .004.

Entity. There was a main effect of Entity, F(3,103) = 246.557, p < .001, ηp² = .878, in regard to how much time was spent in the participant's box. The Dog (M = 48.63 s, SD = 8.019) spent significantly more time close to the participants than the Legobot (M = 18.31 s, SD = 5.861, p < .001), AIBO (M = 4.15 s, SD = 7.612, p < .001), and Cat (M = 1.85 s, SD = 3.668, p < .001). The Legobot spent significantly more time close to the participants than the AIBO and Cat (both p < .001). There was no significant difference between the AIBO and Cat in the amount of time spent in the participant's box.

Time. As the assumption of sphericity was violated, χ²(9) = 31.487, p < .001, the Greenhouse-Geisser test statistic (ε = .862) is reported for the within-subjects analyses of the time the entity spent in the participant's box.

There was a Time × Entity interaction for the amount of time the entity spent in the participant's box, F(10.340, 355.000) = 3.253, p < .001, ηp² = .087. A series of One Way ANOVAs was run to examine the interaction, and Bonferroni post-hocs were used to determine significant differences. In Minutes 1 through 5, the Dog spent significantly more time near the participant than the AIBO, Legobot, and Cat (all ps < .001). Further, for Minutes 1 through 4 the Legobot spent significantly more time near the participant than the AIBO (all ps < .001) and Cat (p = .020 in Minute 1; p < .001 in Minutes 2 through 4). See Table 11 for means, standard deviations, and ANOVA tests relating to the time the entity spent in the participant's box.

Table 11. Number of seconds spent in the participant's box for each entity by minute (means, standard deviations and ANOVA tests)

          Dog             Cat            AIBO           Legobot         Significance
Minute 1  45.46 (14.590)  5.07 (10.632)   1.07 (5.669)   13.76 (11.308)  F(3,107) = 90.655, p < .001, ηp² = .718
Minute 2  43.87 (18.084)  1.83 (5.498)    1.07 (5.669)   17.57 (17.332)  F(3,107) = 65.303, p < .001, ηp² = .647
Minute 3  53.10 (13.407)   .42 (2.282)    2.16 (6.910)   15.13 (15.975)  F(3,107) = 137.280, p < .001, ηp² = .794
Minute 4  47.52 (15.535)  1.60 (6.400)    6.89 (15.961)  27.28 (20.618)  F(3,107) = 51.170, p < .001, ηp² = .589
Minute 5  53.23 (13.000)   .33 (1.826)    9.57 (21.179)  17.83 (17.091)  F(3,107) = 64.475, p < .001, ηp² = .644

While there was a significant main effect of Time for the time the entity spent in the participant's box, F(3.447, 355.000) = 2.724, p = .037, ηp² = .026, Bonferroni post-hocs did not reveal that any minutes were significantly different from each other.

Discussion

Summary of the results

Overall, the Dog was often rated higher than, and spoken to more than, the other entities. The Dog was considered the most trustworthy and most approachable entity, and the most able to interpret human emotion, sense touch, and carry out its own intentions. Overall, the Dog was spoken to more than the Legobot. Further, the Dog was asked more questions and given more commands than both of the robots. The Dog was given significantly more feedback than all other entities, including the Cat. One reason for these differences may be the Dog's level of proximity to, and interactivity with, the participant. From both observation and analysis it was determined that the Dog was frequently near the participant and spent more time in the participant's box than any of the other entities. This may identify proximity as an important factor to take into account during a human-entity interaction.

Males and females gave different attribution and capability ratings. Sex of the participant did not predict actual external behavior (amount spoken, number of toys used); however, it did affect ratings of attributions and capabilities. A pattern emerges such that females rated the Legobot as more intelligent and more capable of navigating the environment than males did. Further, females believed that the Legobot understood them significantly more than males felt it understood them. In regard to the Cat, males rated it as more able to sense touch and navigate the environment than females did. These differences in internal beliefs may demonstrate that males and females have different expectations of the live and robotic entities. Bernstein and Crowley (2008) found that those who had more experience with robots made more nuanced attributions than those who knew less about them. During the demographics portion of the current study, male participants reported having more previous interactions with robots than female participants. One might argue that males have more experience with robots, and therefore give ratings that are more consistent with and specific to the entities' actual capabilities. Future research should examine the difference between males and females in the attributions they give to live entities and robots. The Cat spent less time in close proximity to the participants than the Legobot. This may indicate that the proximity of an entity is more important to the judgments females make in a human-entity interaction than to the judgments males make in the same situation. It should also be noted that there was no interaction of Sex and the amount of time the entity spent in the participant's box; therefore it appears that females may have interpreted similar entity behavior differently than males did.

Time and activity matter to the interaction. Before instructions were given, significantly less was spoken, and word length was shorter, than during the other tasks. When participants were determining entity capabilities with and without toys, the Dog was spoken to more than any other entity. Overall, more was spoken in the first minute of capability assessment than in any other minute of the interaction. This may imply that when participants first tried to find out entity capabilities they spoke more. The most proximal entity (the Dog) continued to elicit more speech throughout the interaction. Word length was longer when participants were simply assessing capabilities; once toys were introduced, word length decreased. Questions, commands and feedback tended to increase once directions were given to the participant. In the first minute of the capabilities interaction, and the first minute of the toy interaction, the Dog was given significantly more commands than all other entities.

In the first minute of the toy interaction, more toys were used than in the second minute. Similar to the amount of spoken words, it appears that participants may rely on their assumptions about the entity in the first minute with toys, and then reduce the number of toys used in the second minute. In general, ratings of one's own mood dropped over time, with more positive ratings at the beginning of the experiment and more negative ratings towards the end.

Sometimes a robotic entity was treated similarly to a live one, and in other cases it was not. The Cat and AIBO had many instances in which they were not significantly different from each other in regard to beliefs about attributions and capabilities. The Legobot was rated as significantly less intelligent and less capable of carrying out its own intentions than the AIBO. In many cases the Legobot was rated significantly lower than all the other entities. It is possible that when participants first interact with an entity, they bring their own expectations based on what it looks like and their previous experience. The AIBO was designed to look like a dog, whereas the Legobot was not. It may be that the Legobot does not pass the initial hurdle that likens it to a real entity, and thus is rated as less capable.

Overall, the Dog, which spent the most time in proximity to the participant, was given more commands and feedback, and was asked more questions, than the robots.


The Cat, while it was a live entity, was not spoken to significantly differently than either of the robots. Further, the Cat was rated significantly lower than the other entities in regard to how friendly and animate it was. Therefore, the lack of proximity and interactivity of the Cat may be responsible for its being rated lower on these attributes, and for its being spoken to more similarly to the robots than to the other live entity. In regard to word length, the Dog, Cat and AIBO were spoken to with longer words than the Legobot. This finding implies that participants' initial beliefs about the Legobot may be different from those about the live entities and the AIBO, despite the fact that all of these entities can respond to visual and auditory stimuli.

Behavior and appearance of the entities. During the experiment the Dog would run, jump, and engage with the participant, while the Cat tended to move toward a far corner of the room. The Legobot would move through the room, stopping when noise was heard. The AIBO would engage with some participants, or in some cases would walk away from them. The AIBO would walk, wag its tail, and occasionally express emotion through its visual interface. These levels of interactivity may have had an impact on the beliefs that participants held about each of the entities.

The appearance of the entities may also have had an impact on the ratings they were given. The Legobot was rated as more aggressive than the Dog and AIBO, which is not surprising, as it had a claw. The Legobot may also have been given lower ratings on some of the perceptual capabilities because it did not have approximations of traditional eyes or ears visually present on it. However, Jipson and Gelman (2007) found that adults rated starfish as being alive despite their absence of perceptual capabilities, which is consistent with other ratings that the Legobot received (females believing that the Legobot was intelligent, and indecision being attributed to it).

Theory

The results indicate that adding characteristics similar to a live entity does not necessarily mean a robot will be treated like one; nor does it mean the robot will be treated as though it is unintelligent. Further, there appear to be both a visual appearance component and a behavioral component that influence the beliefs a person holds about an entity. It appears that in order to be treated like a live entity, a robot must visually resemble one. The AIBO meets this requirement: it resembles a dog more closely than the Legobot, which was spoken to less frequently and rated lower on many attributions and capabilities. Once this resemblance requirement has been met, proximity appears to be important to forming beliefs about an entity. For instance, the Dog, which spent the most time near the participant, was often rated higher and spoken to more than the other entities regardless of their organic nature. It also appears that initial beliefs may override proximity, as the Legobot, which was the second most proximal entity, was rated lower than the Cat and AIBO on many aspects. The fact that the Cat and AIBO tended to stay further away from the participant may partially explain why their ratings are similar. Perhaps, once a robot passes the initial resemblance test of looking like a live entity, beliefs are then based on its proximity and interactivity. This would explain why the two less interactive entities (Cat and AIBO) were rated similarly, and why the Dog was often rated highest. Therefore, it is important not only to create a robot that looks like a live entity, but also one that acts like a live entity and is interactive.

Further, the individual difference of sex has been identified as an important factor to take into account when comparing live and robotic entities. While the behavior of males and females appeared to be similar, the beliefs they developed about the entity varied. Therefore, it cannot be assumed that everyone will hold the same beliefs about a robotic entity after the same interactive session.

Limitations and future directions

As actual live entities were used in the experiment, there was some variability in the entities' actions between participants. The animals and entities did not exhibit exactly the same behavior with each and every participant. However, it has been shown previously that even when the behavior of a dog and a robotic dog was identical, different attributions were given to them (Pepe et al. 2008).

Again, as a result of working with live entities, a between-subjects design was utilized for practical reasons. While this design does not allow for direct comparison of the beliefs an individual holds about the different entities, it is more representative of real-world experience. In the real world a person is likely to interact with only a robotic dog, or only a real dog; it is unlikely that they will interact with multiple different types of entities in quick succession. As a future step, a within-subjects design would be beneficial, as it would add to the literature on human-entity interaction while directly comparing the beliefs of the same participants. As with many studies in the field of psychology, the sample in the current study is predominantly composed of young females, which may limit the generalizability of the results. Despite this, the current study adds to the literature regarding the beliefs people have about both live and robotic entities, as well as how those beliefs are expressed in an interactive situation.

In general, the Cat tended to stay further away from the participant than the Dog and Legobot. This lack of interactivity may have revealed an interesting pattern, as the AIBO and Cat were often treated similarly. It would be advantageous to expand on the study by using a more interactive cat, to see whether consistent results are found. Further, the use of a robotic cat would allow more investigation of the importance of resemblance between live and robotic entities. The role that proximity and interactivity play in the beliefs one holds about an entity also warrants further investigation.

Conclusions

This study supports the idea that different factors play into what a person believes about the capabilities of a robot. Physical appearance appears to interact with the proximity and level of interactivity of the entity. It is important to note that even though external behavior may appear consistent between people, the beliefs and attributions that are given to entities can differ. The results of this study suggest that just because it looks like a dog, walks like a dog and acts like a dog, it does not necessarily mean a person will treat it like a dog.

Acknowledgements

This work was partially supported by a grant from RDECOM entitled, “Team Formation and Optimization in Human-Intelligent Agent Teams.” We wish to thank Jennifer Scott, Ariel Baruch Laing, Gabriella Hancock, Nicholas Lagattuta, Catherine Mobley, Matthew Marraffino, Melissa Raymond, and Mark Spitzer for research assistance.

References

Bartneck, C., & Hu, J. (2008). Exploring the abuse of robots. Interaction Studies, 9(3), 415–433.

Bernstein, D., & Crowley, K. (2008). Searching for signs of intelligent life: An investigation of young children's beliefs about robot intelligence. Journal of the Learning Sciences, 17(2), 225–247.

Bethel, C.L., Salomon, K., & Murphy, R.R. (2009). Preliminary results: Humans find emotive non-anthropomorphic robots more calming. Proceedings of HRI'09, March 11–13, 2009, La Jolla, CA.

Dautenhahn, K. (2003). Roles and functions of robots in human society: Implications from research in autism therapy. Robotica, 21, 443–452.

Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42, 143–166.

Goff, L.G., Sims, V.K., & Chin, M.G. (2005, July). Preschoolers' interactions with live and robotic dogs. Poster presented at the annual meeting of the International Society for Anthrozoology, Niagara Falls, New York.

Hinds, P.J., Roberts, T.L., & Jones, H. (2004). Whose job is it anyway? A study of human-robot interaction in a collaborative task. Human-Computer Interaction, 19, 151–181.

Jipson, J.L., & Gelman, S.A. (2007). Robots and rodents: Children’s inferences about living and nonliving kinds. Child Development, 78, 1675–1688.

Kahn Jr., P.H., Friedman, B., Perez-Granados, D.R., & Freier, N.G. (2006). Robotic pets in the lives of preschool children. Interaction Studies, 7(3), 405–436.

Kerepesi, A., Kubinyi, E., Jonsson, G.K., Magnusson, M.S., & Miklosi, A. (2006). Behavioural comparison of human-animal (dog) and human-robot (AIBO) interactions. Behavioural Processes, 73, 92–99.

Lee, S., Yee-man, I., Kiesler, S., & Chiu, C. (2005). Human mental models of humanoid robots. Proceedings of the 2005 International Conference on Robotics and Automation (ICRA 2005), 2767–2772.

Melson, G.F., et al. (2009). Children’s behavior toward and understanding of robotic and living dogs. Journal of Applied Developmental Psychology, 30, 92–102.

Ogden, B., & Dautenhahn, K. (2000). Robotic etiquette: Structured interaction in humans and robots. In Proceedings of the 8th Symposium on Intelligent Robotic Systems (SIRS 2000), The University of Reading, England, 18–20 July 2000.

Pepe, A.A., Upham Ellis, L., Sims, V.K., & Chin, M.G. (2008). Go, dog, go: Maze training AIBO vs. a live dog, an exploratory study. Anthrozoös, 21(1), 71–83.

Robins, B., Dautenhahn, K., & Dubowski, J. (2006). Does appearance matter in the interaction of children with autism with a humanoid robot? Interaction Studies, 7(3), 479–512.

Sims, V.K., & Chin, M.G. (2002). Responsiveness and perceived intelligence as predictors of speech addressed to cats. Anthrozoös, 15, 166–177.

Tamura, T., et al. (2004). Is an entertainment robot useful in the care of elderly people with severe dementia? Journal of Gerontology: Medical Sciences, 59A(1), 83–85.

Turkle, S., Taggart, W., Kidd, C.D., & Daste, O. (2006). Relational artifacts with children and elders: The complexities of cybercompanionship. Connection Science, 18(4), 347–361.

Wada, K., & Shibata, T. (2008). Social and physiological influences of robot therapy in a care house. Interaction Studies, 9(2), 258–276.

Authors’ addresses

Anne M. Sinatra (corresponding author)
Department of Psychology
College of Sciences
P.O. Box 161390
Orlando, FL 32816–1390
USA

[email protected]

Matthew G. Chin
Department of Psychology
College of Sciences
P.O. Box 161390
Orlando, FL 32816–1390
USA

[email protected]

Valerie K. Sims
Department of Psychology
College of Sciences
P.O. Box 161390
Orlando, FL 32816–1390
USA

[email protected]

Heather C. Lum
Research Associate, Psychology
School of Humanities and Social Sciences
Penn State Erie, The Behrend College
4951 College Drive
Erie, PA 16563
USA

[email protected]

Authors’ biography

Anne M. Sinatra, Ph.D. graduated from the Applied Experimental and Human Factors Psychology Ph.D. program at the University of Central Florida. Valerie Sims, Ph.D. is an Associate Professor in UCF's AEHF Psychology program, and Matthew Chin, Ph.D. is an Instructor in the Psychology Department at UCF. Heather C. Lum, Ph.D. graduated from the University of Central Florida and is currently a lecturer and lab coordinator at Penn State Erie, The Behrend College. The authors are all members of the Applied Cognition and Technology (ACAT) lab. Their research interests include Cognition, Human-Robot Interaction, Human-Animal Interaction and Anthropomorphism.
