
When the chips are down: Social and technical aspects of computer failure and repair

    Michael Quayle *, Kevin Durrheim

School of Psychology, University of KwaZulu-Natal, Private Bag X01, Scottsville 3209, South Africa

* Corresponding author. Tel.: +27 33 2605016; fax: +27 33 2605809. E-mail address: [email protected] (M. Quayle).

    Received 25 April 2005; received in revised form 12 March 2006; accepted 14 March 2006

    Abstract

    This paper explores computer failure as a social event by examining recorded interactions between

    computer users and help-desk consultants (technicians). It was found, first, that the nature of a failure

    was negotiated between participants rather than being simply technically evident. Failure was

defined from users' perspectives, in relation to what they were trying to achieve, rather than

according to technical parameters. Secondly, negotiations of failure had social consequences for both users and help-desk consultants. Both avoided being seen as incompetent and actively defended their

    social standing. Thirdly, such social issues sometimes took precedence over technical and practical

    ones. The implications for HCI theorists and practitioners are twofold: firstly, failure should be

accepted as a regular part of computer use in which human-computer interaction continues even

    though the interface may be non-functional. Secondly, the management of failure could be better

    addressed if technicians were trained in social as well as technical intervention skills.

© 2006 Published by Elsevier B.V.

    Keywords: HCI; Human factors; Reliability; Help-desk; Social aspects; Dependable computing; Computer failure

    1. Introduction

    This paper explores a commonplace aspect of computer use that is a frequent vexation

to users: computer failure. Anecdotal evidence suggests that almost anyone who uses

    computer technology on a regular basis experiences failure with some degree of regularity.

A survey undertaken by PC World magazine, with approximately 16,000 respondents,


    found that, in July 2000, 56.6% of Personal Computers in American homes had at least one

problem (Jones, 2000). Even though it is not perfectly clear what is meant by a

"problem", these figures make it clear that computer failure is a common experience for the

average user. Dvorak (2003) quotes Bill Gates, Chairman of Microsoft, as saying that 5% of Windows machines crash twice a day. Dvorak uses this rare statement of reliability

    from Microsoft to build a flimsy, although not implausible, argument to estimate that

there are "a minimum of 30 billion Windows system crashes a year" (p. 1). If Dvorak's

    figures are credible, then the theoretical average PC crashes approximately once a week.

    This figure does not seem unreasonable when weighed against personal and anecdotal

experience. The BBC News quotes Symantec, an internet security company, as estimating

that "nine out of ten [users] are regularly annoyed by slow, crashing machines" (BBC

    News, 2003a). Other studies have shown Windows NT and 2000 machines have

    abnormal reboots (e.g. after a blue screen) approximately once a month on average

(Simache et al., 2002). It seems that failure is a common (and possibly unavoidable) aspect of interaction with computers and therefore deserves analytic attention.
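Dvorak's arithmetic is straightforward to reconstruct. The short sketch below is ours, not Dvorak's; the installed-base figure is a hypothetical round number included only to show how a total on the order of 30 billion could arise:

    # Reconstructing the arithmetic behind Dvorak's (2003) estimate.
    # Only the first two figures come from the quoted Gates statement;
    # the installed base is a hypothetical assumption for illustration.
    crash_prone_fraction = 0.05          # "5% of Windows machines..."
    crashes_per_day_each = 2             # "...crash twice a day"

    # Expected crashes per machine per day, averaged over all machines:
    avg_per_machine_day = crash_prone_fraction * crashes_per_day_each   # 0.1
    avg_per_machine_year = avg_per_machine_day * 365                    # 36.5

    # The "theoretical average PC" therefore crashes about every 10 days:
    days_between_crashes = 1 / avg_per_machine_day                      # 10.0

    # A hypothetical installed base of ~800 million Windows machines
    # yields a yearly total on the order of Dvorak's figure:
    installed_base = 800_000_000
    total_crashes_per_year = avg_per_machine_year * installed_base      # ~2.9e10

    print(f"{days_between_crashes:.0f} days between crashes on average")
    print(f"{total_crashes_per_year:.1e} crashes per year")

On these assumptions the average interval between crashes comes to roughly 10 days, consistent with the "approximately once a week" figure above.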

    Even though modern HCI is often described as user-centred, the experience of

    computer failure has been vastly under-investigated in the HCI literature in relation to its

empirical frequency, in spite of the fact that most HCI researchers, in their roles as users,

    probably experience failure quite frequently. Of course, researchers have studied the issue

    of computer reliability in great detail, where reliability is generally conceptualised as an

    absence of programming mistakes, errors and bugs and the greatest possible degree of

    system flexibility and durability. The broad project has been to improve design practices to

    limit error, and therefore to increase reliability to the greatest extent possible (e.g. Carroll,

    1997; Lewis and Norman, 1995; Neumann, 1993; Norman, 1990). In this mode of

    research, failure can be conceptualised as something that happens when reliability does

not; in other words, computer failure is understood as a primarily technical affair.

    Continuing to understand failure as an empirical and objective occurrence allows two

    misconceptions to endure: the first is that computer failure is a phenomenon that will be

    eradicated over time by the rigorous application of sound design principles and is therefore

    not worth investigating in its own right. For example, Nickerson (1986) asserted that

    increasing reliability is a trend of modern computers. However, in spite of great

    advancements in design techniques and programming practices, the experience of

computer failure is still commonplace. Nearly two decades later, computer technology shows few signs of outgrowing failure. There are clearly limits to software and hardware

    reliability, particularly in low-cost and increasingly complex systems such as the personal

    computers used by the majority of computer users (Enfield, 1987; Lieberman and Fry,

    2001). Within these constraints, it is clear that users of consumer computer products will

    still experience failure on a regular basis for the foreseeable future (Das, 2003; Siewiorek

    et al., 2004). The second misconception is that real-world computer failure is a temporary

hiatus or intermission in a user's interaction with a computer system; that interaction is

    paused until the computer system is returned to functionality and HCI can continue.

Although this may be true if the observer's horizons are limited to interaction with a

functioning user interface, the interaction with and around the computer often continues after a failure, and often with higher intensity. To take two extreme examples, the BBC

    reports that one frustrated victim of computer failure pulled out a gun and shot his


    malfunctioning laptop, while another threw his computer against a wall in exasperation

    (BBC News, 2003b). Although this is not interaction as it is generally conceptualised in

the HCI literature, these examples make it abundantly clear that human-computer

interaction may continue even (and perhaps especially) when computer-human interaction fails.

    Nevertheless, computer failure, like reliability, is difficult to define or measure. It is

    tempting to define failure technically, by comparing actual system behaviour with a

    predefined blueprint of acceptable limits for system functionality, for example, in the

    IEEE/IFIP definition of failure as a system not performing up to its specifications. But

    users do not have access to any such blueprints of acceptable functionality, and

    increasingly often, such blueprints do not exist (Siewiorek et al., 2004). Subjectively, a

    failure occurs when a computer does not perform as a user expects or needs it to and there

    is an increasing realization in HCI literature that failure should be defined experientially

(e.g. Murphy and Levidow, 2000; Siewiorek et al., 2004). Moreover, when more than one person is involved in a failure situation then the definition of failure is not just subjective,

    but becomes a matter for negotiation, as will be shown in this analysis.
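The distinction between a technical and an experiential definition of failure can be made concrete with a small illustration. The sketch below is entirely ours; the predicates and the boot-time example are hypothetical and are not drawn from the paper or from the IEEE/IFIP documents:

    # Illustrative contrast between the two definitions of failure discussed
    # above. All names and values are hypothetical.

    def technical_failure(observed_boot_s: float, spec_max_boot_s: float) -> bool:
        """IEEE/IFIP-style: failure means behaviour outside the system's
        published specification."""
        return observed_boot_s > spec_max_boot_s

    def experiential_failure(observed_boot_s: float, expected_boot_s: float) -> bool:
        """Experiential: failure means behaviour that departs from what this
        user expects or needs, whether or not any specification exists."""
        return observed_boot_s > expected_boot_s

    # The same behaviour can pass one test and fail the other:
    observed = 90.0                                   # the machine boots in 90 s
    print(technical_failure(observed, 120.0))         # False: within spec
    print(experiential_failure(observed, 30.0))       # True: "slow" to this user

The analysis that follows adds a further twist: when a user and a technician are both present, even the expectation on the right-hand side of the second test is not fixed in advance but negotiated.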

    The available literature shows that it is not unusual for end users to turn failure into a

    social event by approaching friends, family or colleagues for help with their computer

    problems. In fact, they are far more likely to approach people they know than the help-

    desk experts whose job it is to help them (Bannon, 1986; Eveland et al., 1994; Lee, 1986;

    MacLean et al., 1990). For example, in a study of household computer-use, participants

    were provided with user-friendly computer equipment (a Macintosh), training, and easy

    access to a help-desk support line. Even in these ideal circumstances people were more

    likely to approach knowledgeable friends or family members for help than to call the help-

    desk or, failing this, to nominate the knowledgeable other to call the help-desk on their

    behalf (Kiesler et al., 2000).

Even though the empirical studies in the area show that end users generally respond

    to computer problems and challenges socially, support is more usually approached in the

    literature as a component of information systems delivery (e.g. Flew and Kneath, 1994;

Shah and Bandi, 2003), and, less often, as an aspect of HCI (Lundgren, 1998; Selber

    et al., 1996) and usability (e.g. Bharati and Berg, 2003). The bulk of help-desk and support

    research has focused on how help-desk functions can be aided (e.g. Abraham et al., 1991;

Gonzalez et al., 2005; Kriegsman and Barletta, 1993; Marom and Zukerman, 2005; Sauter, 2004) by information support technologies, such as case-based and AI systems, although

    some (e.g. Rimmer and Wakeman, 1999) have cautioned that such systems may

    complicate rather than improve support functions. However, the few studies that have

    analyzed the work of help-desk consultants in fine detail have argued that technical

    support is inherently driven by social interaction rather than codified knowledge (e.g.

Barley, 1996; Das, 2003; Pentland, 1992). For example, Das (2003) concludes that "the

mix of moves exercised in technical support strongly depends on the formulation of tasks

by those requesting support" (from abstract) and Pentland (1992) argues that the

knowledge demonstrated by support technicians is "a situated performance rather than an

abstract representation" (p. 527).

The present paper extends these analyses to show how computer failure may be as

    much a social affair as it is a technical problem, and sheds some light on the kinds


    of interactional intricacies that end users may be faced with after requesting help from

    computer support staff. To do so, interactions between users and Help-Desk Consultants

    (HDCs) will be analysed in detail.

    2. Conversations about computer failures

    The present analysis rests on three simple maxims that will be familiar to discourse or

    conversation analysts: first, that ordinary talk is an indispensable means of managing

    social life, second, that speakers are generally held accountable for their utterances, and,

    third, that speakers choose their utterances with careful reference to their own social

    accountability (Edwards and Potter, 1993). It is widely agreed that the exact expression of

    an account matters and that one turn of phrase cannot be substituted for another without a

subtle (or not so subtle) shift in meaning (e.g. Antaki, 1994; Edwards and Potter, 1993). On the contrary, language is flexible and "a state of affairs may be explained in virtually any

manner if enough social support can be produced for its defence" (Gergen and Gergen,

1980, p. 201).

    As a result, conversation cannot be considered as simple information exchange,

    because the way that people talk speaks volumes about their knowledge, abilities and

social position. Gergen and Gergen (1980) argue that talk is a means of advancing one's

"moral career" and social standing (after Goffman, 1959, 1961, 1963). People are

    heavily invested in the outcomes of their talk and neither their choice of words nor

    their silences are incidental to some underlying cognitive mechanism; people deploy

    language and make attributions to defend their social positions (Edwards and Potter, 1993;

    Heritage, 1988).

    The present study investigated talk in the context of computer failure with the goal

    of investigating the social functions of such talk. It will come as no surprise that

    interactions in failure situations are oriented to the technical task of troubleshooting

    the problem and returning the system to functionality. However, it will be shown

    that the social situation requires careful management by participants and that

    these social concerns are just as important as the technical goals of interaction, if not

    more so.

    3. Method and data

    The study was performed with the assistance of the Information Technology

    department of a large university. The help-desk supported approximately 2500 computer

    users spread over four sites, although only one of these was sampled in this study. On the

    participating site approximately 800 academic, administrative, and student users were

    supported by five help-desk consultants (HDCs), one of whom was absent for the duration

    of the study. The other four (one male and three female) were willing participants in the

study. All spoke English as a first language.

Data were collected by shadowing HDCs on their rounds and tape-recording their

    interactions with users with the written consent of all participants. In 5 days of sampling,


    16 separate interactions involving 21 different users were observed. Only one user did not

    agree to have her voice tape-recorded and, due to the fine-grained nature of analysis,

    this interaction has been dropped from the study. The tape-recorded data was transcribed

in detail, resulting in a dataset of approximately 200 pages, or 87,000 words. For the sake of brevity, transcription conventions have been simplified in this paper unless

    the analysis calls for more detailed features to be reflected (see Appendix A for a

    notation key).

    3.1. Strengths and weaknesses of the data

    This dataset is immensely detailed, as will become apparent in the analysis. The

    recording and detailed transcription of everyday interactions allows a fine-grained

    analysis of this slice-of-life, and enough data was recorded to get a detailed snapshot of

how participants may talk about failure in this arena. However, 5 days' worth of

    interaction in a single sphere of activity does not provide a firm footing to make

sweeping generalizations about how computer failure is always "done". Nevertheless, even

    though this study cannot make universal claims about computer failure, it can show that

    failure is an important element of computer use that can be an intense focus of particular

    forms of social life.

    4. Analysis

    The following analysis makes fairly modest claims about computer failure. However,

    they are claims that substantially challenge the way that issues of reliability and failure are

    approached in the field of HCI. Briefly, the following points will be argued:

    1. Failure is a matter for negotiation rather than simply a self-evident technical fact.

    2. The negotiation of computer failure has other social consequences, particularly in

    terms of competence issues.

    3. Social issues may take precedence over technical ones.

While it is impossible in this analysis to argue that computer failure will always occur in similar ways, it is enough to show that social and technical issues overlap to a significant

    extent.

    4.1. Negotiating failure

    Contrary to the expectation that computer failures should be obvious and technically

    self-evident, there were many instances where both the existence and the nature of a

    failure were the subject of negotiation between users and HDCs.

    Extract 1 demonstrates some of the difficulties of defining computer failure in an

everyday social context. The interaction occurs between three protagonists, namely the user (U5), a help-desk consultant (HDC1) and the observer (OBS). The user has

called in the HDC to investigate his complaint that the computer is "slow". Lines 1–4


occur in the user's absence and the remainder of the extract is recorded with the user

    present. For the sake of brevity, extraneous talk has been omitted (represented by

    asterisks).

    Extract 1

    The HDC, after running some tests, concludes that the machine is not slow (during

and prior to lines 2–8) and later relays this conclusion to the user (line 8). The user

disagrees, citing his personal experience (lines 11–12) and calling on the opinion of

    another HDC (line 14) who not only said that the computer is slow, but that it is

    abnormally slow. This forces the HDC to review her diagnosis and agree that it

    was slow going into Windows even though general operation was fine (lines

16–17). She confirms the negotiated diagnosis by timing a reboot between lines 19

and 24.

The cause of the slow booting was later traced to a disconnected network cable that

    was delaying the booting process as the machine, configured to use Novell network

    services, unsuccessfully polled the network. In this case the user had not attempted to use

network-dependent applications and so the failure could be defined as "not really serious"

    (line 20). The same technical problem with a different user, or even with the same user at a

    different time would have resulted in an entirely different failure definition. For example,

    if the user had attempted to launch an email client the failure may have been defined as an

"email problem" rather than "slow booting".
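The mechanism behind such a delay is worth spelling out. The sketch below is a reconstruction under stated assumptions rather than the actual Novell client behaviour; the host name and timing constants are illustrative (port 524 is the NetWare Core Protocol port). It shows how a boot-time service that polls an unreachable network resource with retries and timeouts can stall start-up while leaving everything else intact:

    # Illustrative reconstruction of the slow-boot mechanism described above.
    # Host and timing values are assumptions, not taken from the study.
    import socket
    import time

    def poll_network_service(host: str, port: int,
                             retries: int = 3, timeout_s: float = 10.0) -> bool:
        """Try to reach a login service during boot; each failed attempt can
        wait out its full timeout before the next retry."""
        for _ in range(retries):
            try:
                with socket.create_connection((host, port), timeout=timeout_s):
                    return True        # reachable: boot continues normally
            except OSError:
                pass                   # unreachable: retry
        return False                   # give up; boot without network services

    start = time.monotonic()
    up = poll_network_service("login.example.net", 524)
    print(f"network {'up' if up else 'down'}; "
          f"start-up delayed {time.monotonic() - start:.0f} s")

With the cable unplugged, up to retries × timeout_s (here 30 s) can pass before the desktop appears: precisely the kind of symptom that gets reported as a "slow" computer rather than as a network fault.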

    This extract demonstrates several important features of failure. Firstly, users define

failure symptomatically rather than by cause. While this may seem obvious, since users usually lack the specialized knowledge to diagnose technical problems, it contrasts with

    the way that failure is defined if it is understood as the flip-side of reliability. Secondly, the


    symptoms by which users define failure are established through task performance. For

    example, if a computer has a faulty internet browser, the latent fault will not be

    experienced as a failure until the user attempts to browse the internet. Thirdly, the final

definition of failure is a matter of social negotiation. Of course, there are cases in which failure is undeniable and not subject to debate at all, such as a catastrophic hard-drive

    crash. At the other end of the scale there are failures so minor that users ignore or work

    around them. In every case failure is most usefully defined subjectively as it impacts on the

user's experience of computer use.

    4.2. Social aspects of failure

    In the failure situations observed it was evident that the apparently technical process of

    troubleshooting was subject to social concerns.

    4.2.1. The sociality of troubleshooting

    Firstly, in real-life failure situations, there are few boundaries between troubleshooting

    and other types of talk. Take the following interaction as an example:

    Extract 2

Lines 1–7 are a continuation of an extended stretch of dialogue about computer viruses,

    how the computer may have been infected and how to detect and remove them. Notice

    how naturally the conversation switches to the weather and then to the effect of dampness

    on hairstyles. Although diagnosing and repairing a virus infection requires technical

expertise, the participants' interaction is clearly not limited to technical matters.

    Participants weave together technical and social talk in a way that defies simple analysis.

This should remind us that treating the social and technical elements of failure-related interaction as if they are separate processes is an analytic convenience rather than a

    manifest reality.


    4.2.2. Different types of technical talk

    During analysis (Quayle, 2004) it emerged that the types of information exchanged in

    typical repair interactions can be roughly divided into two types. The first is socially safe

"operative" information that can be freely exchanged. It usually concerns neutral, observable and physical symptoms and behaviours of equipment. The second is unsafe "inspective"

    information that concerns the characteristics and behaviour of participants and is

    approached by participants with caution.

    In general, operative talk offers no surprises. Information is freely exchanged by

    participants and it is used in unsurprising ways. Inspective talk, on the other hand, is very

    carefully managed. The following examples have been chosen to show how participants

    generally handle inspective talk with caution.

4.2.2.1. "Needing a refresher course". Extract 3 is part of an interaction in which a

supervising user has reported that users of a shared computer are unable to write files to a specific directory location. The computer has Windows 2000 installed, and at the time, the

    IT department was not yet officially supporting it. The problem was investigated with the

    user present and asking questions. This extract cuts into the exchange just after the HDC

has successfully set up a new user account and modified the rights to solve the user's

    reported problem.

    Extract 3

The first sign of inspective talk appears in lines 5–7. The user has just asked the HDC to

    confirm that the other users of the computer will be able to access the internet (and is

thereby asking the HDC to defend her actions). The HDC's response is not convincing,

    and the user tries to ask again. Notice how the repeated request is marked as both uncertain

(by faltering talk) and certain, by the repeated use of "in fact". However, the user is

claiming certainty about his own experience with the computer, rather than in the realm of the current diagnosis (see Gill, 1998). This anecdotal construction is a way of

simultaneously deferring to the HDC's position as an expert (and thereby being cautious


    with inspective talk) and to his own position as a user who is familiar with the machine and

    has a right to have his questions answered.

    In the context of supporting a machine running an unfamiliar operating system

(Windows 2000), the expert is in a difficult position. She is an expert in role, but not in knowledge. Notice how she marks this uncomfortable stance with phrases like "I think"

(line 2), "should" (line 6) and "shouldn't" (line 11) to signal that she does not claim to

    have infallible knowledge about this operating system. Of course, one of the

    dispositional qualities expected of a computer expert is that they have extensive

    knowledge of computers. The HDC is very careful throughout the troubleshooting

    process to introduce information to specify that any apparent lack of knowledge or skill

is confined to Windows 2000 by sporadically saying things like "I haven't worked on

this for a while" and "I've forgotten exactly where the users are". Then, in lines 13–15,

she says "I need to put myself on a refresher course for 2000". The way she phrases this does a lot of work on the types of dispositional attributions that the user can make.

    Firstly, by saying she needs to go on a course she admits that this is essential knowledge.

    Secondly, by saying she needs a refresher course she implies that she already has the

    required knowledge but that she could use some reminding. Thirdly, by saying she needs

    to put herself on a course she positions herself as someone who is intrinsically motivated

    to learn the necessary skills. Finally, she specifies that her ignorance is limited to

    Windows 2000 only.

In the context of her role as an expert, the HDC's obvious lack of knowledge

    about Windows 2000 is an unsafe social liability and her talk is oriented towards

defending her expertise. The user's response is accommodating. He says that he "would like to do a basic course in 2000" (line 17). This is another utterance that does

    a lot of attributional work. By saying that he would like to do a course he positions

    himself as someone who, like the HDC, is eager to learn but as someone who does

    not need to. His desire to do a basic course is most informative in relation to the

HDC's need for a refresher course to remember what she has forgotten. In contrast,

    he is positioning himself as someone who has never known, thereby affirming the

HDC's expertise. By the end of the interaction the user is positioned as someone who

    is ignorant but eager to learn, and the HDC is positioned as someone whose memory

    may be hazy but is nevertheless committed to reacquiring the necessary knowledge

    and skills.

    In this extract, the user and HDC edge very carefully around unsafe information

(that the HDC is unfamiliar with the operating system) but do so cooperatively in the

    context of a successful outcome. The HDC actively takes on and defends her role as

    an expert, and the user cooperatively embraces his role as a supplicant to her higher

    knowledge.

4.2.2.2. "You've just got to be very careful". The next example is not as cooperative.

    Extract 4 is situated in a search for a known virus. The infection has corrupted the

installation of MS Office, which needed to be reinstalled. We join the interaction while the user and HDC are discussing the procedure of removing the virus and restoring the

    computer to functionality.


    Extract 4


    At the start of this extract, the participants seem to be discussing purely technical

aspects of the failure situation (lines 1–9). However, when the user asks a seemingly

    technical question about the source of the virus the HDC responds by implying: (1)

    that the user was not careful enough (line 11), (2) that the user failed to read and

respond to relevant notices issued by the HDC's department (lines 11–30), (3) that the

    user should save attachments to disk and scan them manually before viewing them

(lines 32–41), and (4) that the user should delete strange email in future (lines

41–42). If computer use was a purely technical affair, these instructions would be

    unproblematic. However, there are several signs of uneasiness between participants in

    this exchange. Firstly, everyday conversation between equals is marked by regular

    turn-taking behaviour and long monologues are unusual. However, institutional talk is

    marked by role-related institutional asymmetries (see Drew and Heritage, 1992),

which provides the basis for the HDC's parent-like admonition. However, there are

    signs that the HDC is treading carefully. Firstly, her speech is marked as uncertain or

tentative by the many elongations (signified by colons, e.g. elonga:::tion) and pauses. More importantly, the HDC often switches between first- and third-person

    grammatical constructions in quite a revealing way. She begins in the first person,

saying "you've just got to be careful" (line 11, italics added) and "you know all those

notices that we send out" (lines 11–12, italics added). However, she seamlessly

switches to third-person construction, saying that "they send out those notices" (lines

15–16) but "people ... don't realize" (lines 16–19). This has the effect of distancing

    both the authoritative action of sending out notices and the irresponsible response of

ignoring them. However, the HDC reveals that she is using third-person grammar as a

    distancing device when she slips between the two in a single sentence, saying

"people ... just go and open it and then it infects your machine" (lines 16–20). Clearly she is not talking about other people's irresponsible actions at all, but those of the

    user. She further attempts to ameliorate the potentially injurious nature of her talk by


    speculating that the reasons for such irresponsible behaviour in general (and therefore

the user's behaviour in this case) may be ignorant negligence ("they don't realize",

line 19) rather than negligence pure and simple. By her efforts to ameliorate her

criticism we can see that the HDC is oriented to the socially threatening nature of her talk.

    Remember that, on a technical level, the HDC was trying to give a simple piece of

    advice: to save attachments and scan them manually to avoid virus infections.

    However, we have shown that she was also oriented to deeper social implications of

    this simple technical statement. The user, who had so far been relatively silent,

reacted to the HDC's advice by innocuously asking why the virus was not detected

by the antivirus software (lines 46–48). The HDC initially tried to blame the user's

    version of the antivirus software, but then realized that the software was current (line

    50). She then made an argument that the virus infection occurred before the latest

update occurred (lines 54–60), but in the process was forced to admit that the update

    may not have been provided timeously by the IT department. However, in order to

    make this argument, she was forced to admit that the antivirus software should

    automatically pick up and disinfect the latest viruses. This obviously undermines her

previous position that the user was negligent (in lines 11–42). The user's seemingly

innocent question in lines 46–48 forced the HDC to shift the blame away from the

    user towards the IT department.

    The important thing to notice here is the way that the participants requested, exchanged

    and disputed inspective information. Talk about the technical problem was not neutral: it

had social implications for both the user and the HDC. Notice how the types of information offered by the user and the HDC, respectively, tugged at the types of

    attributions that could be made and, more importantly, how each of these arguments offers

    a very different basis for determining responsibility for the virus infection. The user was

    working towards an attribution to a failure of the antivirus software while the HDC was

    leaning towards an attributional model that would locate responsibility for avoiding

    viruses with the user. Although the dialogue, at face value, was about technical issues, the

way that participants positioned themselves was strongly oriented to the defence of social

    positions.

    4.3. Social issues may take precedence over technical ones

    In the previous extract, effort could have been spared if the failure could have

    been discussed in purely technical terms. This would have involved both the user

    and the HDC accepting that the anti-virus software is not always reliable and that

    best practice would be to save attachments and run a manual scan before

    opening them. However, this would have forced the user to admit a certain

    amount of negligence and the HDC to admit that the antivirus software supplied by

the IT department may not always be effective. The participants' dialogical struggles reveal the importance of social issues in the apparently technical task of computer

    repair.


    Another example of this can be seen in the following extract. This interaction takes

    place in a lengthy engagement where the HDC was called in to repair a faulty disk drive

and was unable to find the cause of the problem.

Extract 5

    By this point in the engagement the HDC finds herself in a dilemma. Although she is

the expert, she has had no success in resolving the reported fault. She is left with the task of

defending her social standing without having resolved the problem when the user offers her an escape hatch by suggesting "It's having a bad day" (line 4). This explanation, if it

    were mutually accepted, would offer both the user and the HDC a chance to amicably

    end the engagement. However, it would also imply that the failure was due to factors that

    were both random and unfathomable. This explanation personifies the machine and

endows it with moods: states of being that are unaccountable and de-coupled from

    cause and effect linkages. In other words, the user offered an explanation that depended

    on mutual acceptance that a rational attribution was not possible in that case. Instead of

    accepting this non-rational account, the HDC rather explains the actions she has taken to

test various possible causes that could result in this effect (lines 7–12). She offers the

alternative that they will just have to "shrug [their] shoulders and say [they] don't know what caused it" (line 15). She refuses to accept the user's suggestion that cause-and-effect

    have somehow broken down and instead constructs the problem as something that could


    be understood if enough information were available. This reconstruction of the situation

does not release her from the engagement (as the user's suggestion would have), but it

    does protect her position as someone who could solve the problem if she only had

enough data.

However, she goes further and offers the alternative explanation, bolstered by her

description of the troubleshooting process she has undertaken, that the problem is "gone now"

    (line 18). This powerfully defends her expert status since, if the problem no longer exists, it is

    reasonable that even an expert would be unable to detect it. However, it does not release her

from the social engagement as accepting the user's explanation would have, because it implies

    that the problem could recur and it is her responsibility to discover the circumstances under

    which it could do so. Even though the HDC attempted to terminate the engagement at this

    point, the user did not validate her efforts to leave and she ended up working on this problem

for a further 30 min without success before she finally managed to secure the user's release

from the social contract.

In both of these extracts, participants negotiate the cause of a failure with more respect

    for their own social positions than for technical and practical aspects of the

    troubleshooting engagement. In Extract 4, the user and the HDC were sparring

    to determine the responsibility for the failure. In Extract 5, the HDC rejected a reasonable

    but irrational explanation at the expense of failing to terminate the interaction and

consequently expended a great deal of additional time and effort investigating the problem.

    5. Conclusion and implications

    This analysis shows, firstly, that failure is a process that is engaged in by users. It cannot be

    considered a pause or hiatus in which they do other things while their computer is repaired.

    Although many problems may be solved by users on their own, this analysis (in line with

    previous research) shows that computer failure often becomes the background for a type of

    social life related to returning the computer to functionality. This form of social life is

    structured around technical concerns but, at the same time, is oriented to important social

    issues such as the allocation of blame and the defence of social standing. Since computer

    failure can become a social event it would be inappropriate to view it as merely a technical

concern (as is so often the case in knowledge-management and AI approaches to help-desk management). This engagement with failure has three important characteristics that may be

    interesting to HCI theorists and practitioners:

    Firstly, failure is often defined through negotiation rather than being technically self-

    evident. Users and technicians negotiate the symptoms, what these symptoms might mean,

    and whether such symptoms require attention or not. Although there may be an objective

    technical fault at the root of a reported failure, the failure should be understood as a subjective

    and negotiable phenomenon related to the usefulness of the computer to the user (see also

    Murphy and Levidow, 2000; Siewiorek et al., 2004). For example, a single technical problem,

such as a network cable being disconnected, could result in the problem of slow booting for a

user who only uses a word processor, or in a catastrophic failure for a user dependent on networked applications; the experience of a technical problem as a failure depends on the

user's goals and needs. In other cases there was no such objective basis for a reported fault,


    and the nature (and even the existence) of the failure was a matter decided on through social

    interaction and debate.

    Secondly, such negotiations of failure have important social consequences. Users may

be blamed or held accountable for failure, even when they are victims of circumstance. At the same time, once HDCs acknowledge a user's description of circumstances as a

    failure they become socially responsible for solving it by virtue of their institutional

    roles and responsibilities. Furthermore, maintaining their position of expertise requires

    that they avoid blame for the problem and provide convincing evidence of progress

    towards resolving the failure. The social situation of failure is carefully managed by

    participants, and talk about failure is carefully constructed to act on the social fabric. The

    user and HDC are acting from within their respective social positions, and their

    interactions are oriented to living-out and defending these. This is done in large part by

    the active use of talk to construct and defend particular social positions within the failure

situation. Technical and social concerns are not distinct, and may be addressed simultaneously, often by the same utterances.

    Thirdly, social issues may (often) take precedence over technical ones. Users and

    HDCs may resist definitions of failure that challenge their social standing in any

    way. Both users and HDCs may be willing to sacrifice a great deal of time and effort

    (and sometimes forgo a solution) to avoid definitions of failure that are potentially injurious.

    The close attention that users need to pay to social issues (such as competence) in interactions

with professional computer support staff may partially explain the common tendency reported

    in the literature for users to invoke official support channels only as a last resort.

    Although it is possible that there might be gender differences in the way that male and

    female HDCs approach the social situation of failure (and the present study drew

    primarily on data from female HDCs), the findings are sufficiently in line with prior

    research (cf. Barley, 1996; Das, 2003; Pentland, 1992) to suggest that the interactional

    features of technical support in failure situations are relatively universal.

Since failure can be an intensely social phenomenon, it cannot be adequately

    considered to be a technical affair related simply to reliability. It is increasingly clear that,

    particularly in the low-cost and complex systems used by average users, improving

    reliability will not eradicate failure for the foreseeable future. (And, in any case, a user can

    perceive a failure anytime a system fails to respond as the user expects, even if the system

itself is behaving according to specifications.) Therefore, an understanding of the situation of failure would be an important addition to a holistic understanding of the psychology and

sociology of human-computer interaction.

    A user-centred computing philosophy should extend to all regular aspects of

    computer-use. An understanding of the experience of failure, and particularly of its

    social constraints, should inform the design of failure-support systems. In line with

    user-centred design philosophies, user-support systems should be designed to minimise

    both downtime, with its associated losses in productivity, and the negative social and

    emotional consequences for the user. As such, the management of computer failure

    may be an important new frontier in the design of interfaces and computer support

systems.

Given that technical support is often such a social process, user-consultants and

    technicians should be competent socially as well as technically. The experience of failure


    for a user may involve negative elements such as blame and shame that, in general

    circumstances, require social defence. A user-centred philosophy of failure-support should

    provide social and technical ways of scaffolding this experience.

    Appendix A. Notation conventions

    Notation conventions for all extracts

    U1, U2 etc. A user

    HDC: A help-desk consultant

    OBS: The observer

    Underline Emphasis

. Pause

Huhuh Laughter

    (U1: Ya?) A short interjection by another speaker

    CAPITALS Indicate that speech is LOUDER

    (inaudible) Marks inaudible speech

    (Probably) The probable transcription of hard-to-hear speech

((Comment)) Transcriber's comments and additional explanatory information.

    *** A short section of transcript is omitted

    The detailed notation of Extract 4 records the following additional features

(.) Short pause

[2 s] Timed pause, in seconds

    / Marks a stutter or word correction without a pause, e.g.

    he/he/help

    Ye:s Elongated sound. Two or three colons for very long elongations,

    e.g. Ye:::s.

    words-with-hyphens Rapid speech

    References

Abraham, D.M., Spangler, W.E., May, J.H., 1991. EXPERTech: issues in the design and development of an

intelligent help-desk system. Expert Systems with Applications 2 (4), 305–319.

    Antaki, C., 1994. Explaining and Arguing: the Social Organization of Accounts. Sage, London.

    Bannon, L., 1986. Helping users help each other. In: Norman, D.A., Draper, S.W. (Eds.), User Centred System

    Design. Lawrence Erlbaum Associates, Hillsdale, NJ.

    Barley, S.R., 1996. Technicians in the workplace: ethnographic evidence for bringing work into organization

studies. Administrative Science Quarterly 41, 404–441.

    Bharati, P., Berg, D., 2003. Managing information systems for service quality: a study from the other side.

Information Technology and People 16 (2), 183–202.

BBC News, 2003a. Keep cool over computer hassles [On-line]. Retrieved 10-23-2003, Available from: news.bbc.co.uk/go/pr/fr/-/1/hi/technology/3204719.stm

BBC News, 2003b. Odd mishaps cause computer grief [On-line]. Retrieved 10-23-2003, Available from: news.bbc.co.uk/go/pr/fr/-/1/hi/technology/3193366.stm


Carroll, J.M., 1997. Human-computer interaction: psychology as a science of design. International Journal of

Human-Computer Studies 46, 501–522.

Das, A., 2003. Knowledge and productivity in technical support work. Management Science 49 (4), 416–431.

    Drew, P., Heritage, J., 1992. Analyzing talk at work. In: Drew, P., Heritage, J. (Eds.), Talk at work: Interaction in

Institutional Settings. Cambridge University Press, Cambridge, pp. 1–65.

    Dvorak, J.C., 2003. Magic number: 30 billion. PC Magazine [On-line]. Retrieved 8-12-2003, Available from:

www.pcmag.com/author_bio/0,3055,a=123,00.asp

    Edwards, D., Potter, J., 1993. Language and causation: a discursive action model of description and attribution.

Psychological Review 100, 23–41.

Enfield, R.L., 1987. The limits of software reliability. Behaviour and Information Technology 1987, 36–43.

    Eveland, J.D., Blanchard, A., Brown, W., Mattocks, J., 1994. The role of help networks in facilitating use of

    CSCW tools, Proceedings of the 1994 ACM Conference on Computer Supported Cooperative Work, Chapel

    Hill, North Carolina, United States. ACM Press, New York, NY, USA.

Flew, S.R., Kneath, T.H., 1994. Providing quality user support. Journal of Petroleum Technology 46 (6), 484–485.

    Gergen, K.J., Gergen, M., 1980. Causal attribution in the context of social explanation. In: Gorlitz, D. (Ed.),

Perspectives on Attribution Research and Theory: The Bielefeld Symposium. Ballinger, Cambridge, MA, pp. 195–217.

Gill, V.T., 1998. Doing attributions in medical interaction: patients' explanations for illness and doctors'

responses. Social Psychology Quarterly 61, 342–360.

    Goffman, E., 1959. The Presentation of Self in Everyday Life. Doubleday, New York.

    Goffman, E., 1961. Encounters. Bobbs-Merril, Indianapolis, IN.

    Goffman, E., 1963. Stigma. Prentice-Hall, Englewood Cliffs, New Jersey.

    Gonzalez, L.M., Giachetti, R.E., Ramirez, G., 2005. Knowledge management-centric help-desk: specifications

and performance evaluation. Decision Support Systems 40 (2), 389–405.

    Heritage, J., 1988. Explanations as accounts: a conversation analytic perspective. In: Antaki, C. (Ed.), Analysing

everyday explanation: a casebook of methods. Sage, London, pp. 127–144.

Jones, M., 2000. PC Reliability & Service: things fall apart. www.pcworld.com/resource/printable/article/o,aid,16808,00.asp [On-line].

    Kiesler, S., Zdaniuk, B., Lundmark, V., Kraut, R.E., 2000. Troubles with the internet: the dynamics of help at

home. Human-Computer Interaction 15 (4), 323–351.

    Kriegsman, M., Barletta, R., 1993. Building a case-based help-desk application. IEEE Expert-Intelligent Systems

& Their Applications 8 (6), 18–26.

    Lee, D.M.S., 1986. Usage pattern and sources of assistance for personal computer users. MIS Quarterly 10 (4),

313–325.

    Lewis, C., Norman, D.A., 1995. Designing for error. In: Baecker, R.M., Grudin, J., Buxton, W.A.S.,

Greenberg, S. (Eds.), Readings in Human-Computer Interaction: Toward the Year 2000, second ed. Morgan

Kaufman, San Francisco, CA, pp. 686–697.

Lieberman, H., Fry, C., 2001. Will software ever work? Communications of the ACM 44, 122–124.

Lundgren, T.D., 1998. End-user support. Journal of Computer Information Systems 39 (1), 60–64.

MacLean, A., Carter, K., Lovstrand, L., Moran, T., 1990. User-tailorable systems: Pressing the issues with

buttons. In: Proceedings, CHI '90, Seattle, 1–5 April, pp. 175–182.

    Marom, Y., Zukerman, I., 2005. Analysis and synthesis of help-desk responses. Lecture Notes in Computer

Science 3683, 890–897.

    Murphy, B., Levidow, B., 2000. Windows 2000 dependability, Proceedings of IEEE International Conference on

    Dependable Systems and Networks. IEEE, New York, NY.

    Neumann, P.G., 1993. Are dependable systems feasible? Communications of the ACM 36, 146.

    Nickerson, R.S., 1986. Using computers: human factors in information systems. MIT Press, Cambridge, MA.

    Norman, D.A., 1990. Commentary: human error and the design of computer systems. Communications of the

ACM 33, 681–683.

    Pentland, B.T., 1992. Organizing moves in software support hot lines. Administrative Science Quarterly 37,

527–548.

    Quayle, M., 2004. When the chips are down: attribution in the context of computer failure and repair.

    Unpublished Thesis, University of KwaZulu-Natal, Pietermaritzburg.


    Rimmer, J., Wakeman, I., 1999. Who helps the helpers: technological change in the help-desk. In: Buckner, K.

    (Ed.), Ethnographic Studies in Real and Virtual Environments: Inhabited Information Spaces and Connected

Communities. Edinburgh University, pp. 70–80 (24–26 January 1999).

    Sauter, V.L., 2004. An exploratory analysis of the need for user-acquainted diagnostic support systems.

International Journal of Information Technology & Decision Making 3 (3), 471–491.

    Selber, S.A., Johnson-Eilola, J., Mehlenbacher, B., 1996. Online support systems. ACM Computing Surveys

(CSUR) 28 (1), 197–200.

    Shah, V., Bandi, R.K., 2003. Capability development in knowledge intensive IT enabled services. European

Journal of Work and Organizational Psychology 12 (4), 418–427.

    Siewiorek, D.P., Chillarege, R., Kalbarczyk, Z.T., 2004. Reflections on industry trends and experimental research

in dependability. IEEE Transactions on Dependable and Secure Computing 1 (2), 109–127.

    Simache, C., Kaaniche, M., Saidane, A., 2002. Event log based dependability analysis of Windows NT and 2K

    systems. Proceedings of the 2002 Pacific Rim International Symposium on Dependable Computing (PRDC)

2002, 311–315.
