Computers in Human Behavior 20 (2004) 661–689
www.elsevier.com/locate/comphumbeh

Reliability assessment of the attitude towards computers instrument (ATCI)

Teresa M. Shaft *, Mark P. Sharfman, Wilfred W. Wu

Michael F. Price College of Business, 307 W. Brooks, University of Oklahoma, Norman, OK 73019-4006, USA

Available online 18 November 2003

* Corresponding author. Tel.: +1-405-325-2880; fax: +1-405-325-7482. E-mail address: [email protected] (T.M. Shaft).

doi:10.1016/j.chb.2003.10.021

Abstract

Individuals' attitudes towards computers are a key component in understanding users' acceptance of and satisfaction with computer-based information systems. As such, individuals' attitudes towards computers have been of interest to researchers in a variety of settings for some time, and numerous instruments have been developed to assess this construct. We describe these instruments and discuss issues for researchers to consider when selecting an instrument with which to assess attitude towards computers. We find few instruments that are suitable for a general setting or have had their reliability thoroughly assessed. We therefore present a reliability assessment of the Attitude Towards Computers Instrument (ATCI), which was designed to be applicable in a wide variety of settings. The reliability assessment includes latent structure (confirmatory factor) analysis, internal consistency (Cronbach's alpha) analysis and stability (test–retest) analysis. We find that the reliability of the ATCI compares favorably with existing instruments, making it a better choice for many research settings. Reliability analysis such as that presented in this paper helps move the information systems field forward by providing researchers with a reliable instrument with which to assess attitude towards computers.

© 2003 Elsevier Ltd. All rights reserved.

Keywords: Computer attitudes; Attitude measures; Attitude measurement; Computers

1. Introduction

Computer-based information systems have become an integral part of managerial decision making (cf. Galletta & Lederer, 1989). Despite the widespread availability of information systems, many organizations do not gain the full benefit of their systems because some individuals resist using them. In fact, "(u)nderstanding why people accept or reject computers has proven to be one of the most challenging issues in information systems research" (Davis, Bagozzi, & Warshaw, 1989, p. 982). Previous research suggests that attitudes play a key role in predicting user acceptance and satisfaction (Bailey & Pearson, 1983; Coffin & MacIntyre, 1999; Ives, Olson, & Baroudi, 1983). In response to this challenge, a large body of research has been conducted to determine the effects of attitudes and beliefs on individuals' use of computers (e.g. DeSanctis, 1983; Fuerst & Cheney, 1982; Lucas, 1975; Swanson, 1982). As such, attitudes towards computers have been used to predict constructs as varied as satisfaction with end-user computing (e.g. Rivard & Huff, 1988) and the effect of implementing IS on organizational power distributions (Burkhardt & Brass, 1990). However, Davis et al. (1989, p. 983) suggested that these research findings have been mixed and inconclusive, in part because of the wide array of attitude, belief and satisfaction measures that have been employed, often without adequate theoretical or psychometric justification.

This paper takes a step towards developing theoretically and psychometrically justified instruments by first considering the many attitude towards computers instruments that exist and their psychometric properties. We then present a reliability analysis of the Attitudes Towards Computers Instrument (ATCI). The ATCI differs from many related instruments. Specifically, it was developed to be applicable to a broad range of populations, including business professionals, and to be relatively short, such that it can be used in studies that require the assessment of numerous constructs.

We organize the remainder of the paper as follows. First, we describe the theoretical relationship between attitude and behavior and distinguish attitude towards computers from related constructs. Then, we present an overview of computer attitude instruments and related reliability studies. We then present evidence of the ATCI's reliability through latent structure (confirmatory factor) analysis, internal consistency (Cronbach's alpha) analysis and stability (test–retest) studies. We present the purpose, methods, analytic techniques and results for each analysis separately.

2. The theoretical relationship between attitude and behavior

Studying Management Information Systems (MIS) users' attitudes was basic to the creation of "a theory of MIS development in which MIS success and failure is explained" (Swanson, 1982, p. 157). Attitudes provide people with a framework to interpret the world and integrate new experiences (Galletta & Lederer, 1989). Ajzen and Fishbein argue that by understanding an individual's attitudes towards something, one can predict that individual's "overall pattern of responses to the object" (1977, p. 888) as new experiences occur. Further, the degree of predictability is highest when there is a clear linkage between the target action and any attitudes that are formed (Ajzen & Fishbein, 1977). Given the pervasiveness of computers in society, it is likely that most individuals have developed some attitudes towards these machines. As such, intentions concerning computer use should also be well developed. Therefore, consistent with Ajzen and Fishbein (1977), if we understand attitudes towards computers, we should be able to predict computer-related behaviors and choices. To predict these behaviors and decisions with assurance requires a reliable attitudes towards computers instrument.

Fig. 1. The relationships among attitudes, intentions, and behaviors, developed from Fishbein and Ajzen (1975).

As we show in Fig. 1, an attitude can be exogenous to intention and (indirectly) exogenous to behavior, or endogenous, shaped by behavior. In researching computer users' attitudes, investigators can use attitude either as an independent variable that predicts computer-related behaviors (see, for example, Burkhardt & Brass, 1990) or as a dependent variable by which researchers examine the antecedents of users' attitudes (see, for example, Burkhardt, 1994; or Igbaria & Parasuraman, 1989). Because of the variety of ways that attitudes towards computers can be used as a construct within information systems research, a theoretically and psychometrically justifiable measure of attitudes towards computers has the potential to be useful in a variety of research settings.

3. Distinction between attitude towards computers and related concepts

Although we focus on instruments that assess attitudes towards computers, there are three related concepts commonly found in the IS literature: computer anxiety, user information satisfaction (UIS), and the technology acceptance model (TAM). We consider attitudes towards computers distinct from each of the other three; our reasoning follows. Of the three, perhaps the most closely related construct is computer anxiety. In the past, computer anxiety and attitudes towards computers were seen as synonymous (i.e. an individual who experiences high levels of computer anxiety is said to have a negative attitude towards computers) or as separate variables with common antecedents. However, evidence suggests that computer anxiety is an intervening variable between other variables, such as demographics, and attitudes towards computers (Igbaria & Parasuraman, 1989). Therefore, it appears that computer anxiety and attitude towards computers are distinct constructs.

Regarding UIS, such instruments measure the success or failure of a particular IS (Galletta & Lederer, 1989). Although attitudes and satisfaction are related (Galletta & Lederer, 1989), instruments to assess UIS and attitudes towards computers operate at different levels. The focus when measuring attitude towards computers is on a relatively stable individual characteristic or trait, irrespective of a specific system. UIS instruments focus on a referent, a particular IS or a specific program, and UIS is considered a state rather than a trait variable (Galletta & Lederer, 1989). Although an interplay between attitudes towards computers and UIS exists (Galletta & Lederer, 1989), the focus on individuals' attitudes towards computers in general, and as a trait rather than a state, distinguishes the construct from UIS.

TAM (Davis, 1989) is an important theory that has been widely investigated and applied in the IS literature. TAM builds upon Ajzen and Fishbein's (1977) Theory of Reasoned Action. In TAM, an individual's perceived usefulness and perceived ease of use of a particular information system influence their attitude toward using that system, which affects their intention to use the system and, in turn, their actual use of the information system. However, similar to user information satisfaction, TAM focuses on an individual's acceptance of a particular IS rather than assessing a general trait. Hence, we regard TAM as distinct from attitude towards computers.

These four concepts (attitude, anxiety, information satisfaction and technology acceptance) are all important in understanding individuals' use of computer systems. We focus on attitude towards computers because it has been conceptualized as a trait variable rather than a state variable relevant to a specific IS. As such, an instrument to assess attitude towards computers has application to many research settings. In addition, despite the importance of attitude towards computers and its long history of use as a research construct, psychometric justification of such instruments has been limited, as we discuss in the next section. Given the widespread use of this construct, an instrument whose reliability has been rigorously confirmed could benefit many researchers.

4. Previous research on attitudes towards computers

Table 1 provides an overview of the many instruments developed to assess attitude towards computers. Within Table 1, we present the instruments in chronological order based on the publication date of the original instrument. In subsequent discussion, we refer to each instrument by number, as indicated in the column labeled "entry." The entry identifier is based on the original instrument, and we organized the table such that follow-up studies of an instrument appear directly below the entry for the original instrument. In developing Table 1, we focused on reports of new instruments or studies that investigated the psychometric properties of an instrument. Therefore, we excluded from our review studies that utilized an existing instrument without assessing its psychometric properties.

Table 1
A historical summary of computer attitude scales

Entry | Scale | Author | # of items | Target population | Sample for reliability analysis | Item format | Factor analysis | Internal consistency | Test–retest
1 | Nationwide Survey | Lee (1971) | 20 | General Population | Diverse General Public | Likert | Exploratory (Varimax) | α = 0.77–0.79 | n.a.
– | Re-evaluation | Belleau and Summers (1993) | 20 | General Population | Undergraduate students | Likert | Exploratory (Varimax) | α = 0.63 | n.a.
2 | Computer Survey | Stevens (1980) | 11 | Educators | Student Teachers | Likert | n.a. | n.a. | n.a.
– | Re-evaluation | Woodrow (1991) | 11 | Educators | Student Teachers | Likert | Exploratory (Varimax) | r = 0.56 (split-half) | n.a.
3 | Attitudes about Computers | Zoltan and Chapanis (1982) | 64 | Professionals | CPAs, lawyers, pharmacists, MDs | Likert & Semantic Differential | Exploratory (Principal) | n.a. | n.a.
4 | Attitudes Toward Computers | Reece and Gable (1982) | 10 | Students | 7–8th Graders | Likert | Exploratory (Oblique & Varimax) | α = 0.87 | n.a.
– | Re-evaluation | Woodrow (1991) | 10 | Students | Student Teachers | Likert | Exploratory (Varimax) | r = 0.87 (split-half) | n.a.
5 | Beliefs About Computers | Ellsworth and Bowman (1982) | 17 | Students | Intro. Bio Students | Likert | n.a. | α = 0.77 | 1-month r = 0.85
6 | Computer Use Questionnaire | Griswold (1983) | 20 | Educators | Student Teachers | Semantic Differential | n.a. | α = 0.75 | n.a.
– | Re-evaluation | Woodrow (1991) | 20 | Educators | Student Teachers | Likert | Exploratory (Varimax) | r = 0.66 (split-half) | n.a.
7 | Cybernetics Attitude Scale | Wagman (1983) | 100 | General Population | College Students | Likert | Exploratory (Oblique) | n.a. | n.a.
8 | Computer Attitude Scale (CAS) | Loyd and Gressard (1984) | 30 | Students | 8–12th Graders | Likert | Exploratory (Principal & Varimax) | α = 0.86–0.95 | n.a.
– | Redesign | Loyd and Loyd (1985) | 40 | Educators | Educators | Likert | Exploratory (Principal & Varimax) | α = 0.82–0.90 | n.a.
– | Re-use & re-evaluation | Massoud (1990) | 30 | Low-literate students | Students in GED Class | Likert | Exploratory (Varimax) | α = 0.75–0.91 | n.a.
– | Redesign | Bandalos and Benson (1990) | 23 | Students | Graduate & Undergraduate Students | Likert | Exploratory & Confirmatory | α = 0.86–0.91 | n.a.
– | Re-evaluation | Woodrow (1991) | 30 | Students | Student Teachers | Likert | Exploratory (Varimax) | r = 0.94 (split-half) | n.a.
– | Re-use & re-evaluation | Nash and Moroz (1997) | 40 | Educators | Certified Teachers | Likert | Confirmatory (Varimax) | α = 0.86 | n.a.
9 | Computer Attitude Scale | Collis (1984) | 24 | Students | Secondary Students | Likert | Exploratory | n.a. | n.a.
10 | Attitude Towards MIS (ATMIS) | Kjerulff and Counte (1984) | 20 | Students | Hospital Staff Volunteers | Likert | n.a. | α = 0.89 | n.a.
– | Attitude Toward Computers in General (ACG) | Kjerulff and Counte (1984) | 20 | Students | Hospital Staff Volunteers | Semantic Differential | n.a. | α = 0.85 | n.a.
11 | Computer Attitude Scale (CATT) | Dambrot, Watkins-Malek, Silling, Marshall, & Garver (1985) | 20 | Students | College Freshmen | Likert | n.a. | α = 0.79–0.84 | n.a.
12 | Cognitive & Affective Computer Attitudes | Bannon et al. (1985) | 17 | Educators & Students | Students & Educators | Likert | Exploratory (Varimax) | α = 0.90–0.93 | n.a.
13 | Attitude Toward CAI | Allen (1986) | 21 | Students (professional: nursing, EE, etc.) | Nursing Students | Semantic Differential | Exploratory (Varimax) | α = 0.58–0.83 | n.a.
14 | Computer Attitude Scale (CAS) | Nickell and Pinto (1986) | 20 | General Population | Students | Likert | n.a. | α = 0.81 | 2-weeks r = 0.86
15 | Computer Attitude Scale | Abdel-Gaid, Trueblood, and Shrigley (1986) | 23 | Teachers | Secondary Teachers | Semantic Differential | Exploratory | α = 0.89 | n.a.
16 | Attitudes Toward Computer Usage Scale (ATCUS) | Popovich, Hyde, and Zakrajsek (1987) | 20 | General Population | Undergraduate College Students | Likert | Exploratory | α = 0.84–0.88 | 2-weeks r = 0.81–0.91
– | Re-evaluation | Belleau and Summers (1993) | 20 | General Population | Undergraduate students | Likert | Exploratory (Varimax) | α = 0.4259 | n.a.
17 | Bath County Computer Attitudes Inventory (BCCAS) | Bear et al. (1987) | 26 | Students | 4–12th Graders | Likert | n.a. | α = 0.94 | n.a.
18 | Attitudes Toward Computers Scale (ATCS) | Rosen et al. (1987) | 26 | General Population | University Students (in multiple studies) | Likert | Exploratory | α = 0.76 | n.a.
19 | Minnesota Computer Literacy & Awareness Assessment Instrument (MCLAA) | Swadener and Hannafin (1987) | 17 | Students | 6th Graders | Likert | n.a. | α = 0.68–0.74 | n.a.
– | Redesign | Comber, Colley, Hargreaves, & Dorn (1998) | 12 | Students | Secondary School Students | Likert | n.a. | α = 0.79–0.88 | n.a.
20 | Attitudes to Computers of Managers in the Hospitality Industry | Gamble (1988) | 17 | Managers in Hospitality Industry | Hospitality Workers | n.a. | n.a. | n.a. | n.a.
21 | Computer Attitude Measure (CAM) | Kay (1989) | 30 | General Population | Student Teachers | Likert & Semantic Differential | Exploratory (Varimax) | α = 0.87–0.94 | n.a.
22 | Computer Attitudes & Learning Performance | Gattiker and Hlavka (1992) | 17 | Students | Students | Likert | Exploratory (Varimax) | α = 0.68 | n.a.
23 | Attitudes Toward Computers Questionnaire (ATCQ) | Jay and Willis (1992) | 32 | Elderly | Elderly in a Community Home | Likert | n.a. | n.a. | n.a.
24 | Attitude Toward Computer Scale (ATCS) | Francis (1993) | 24 | Students | BYU Undergraduates | Likert | Exploratory (Varimax) | n.a. | n.a.
25 | Computer Attitude Survey | Klein, Knupfer, and Crooks (1993) | 15 | Students | Students | Likert | Exploratory (Varimax) | α = 0.69–0.83 | n.a.
26 | Computer Attitude Scale for Secondary Students (CASS) | Jones and Clarke (1994) | 40 | Students | Secondary Students | Likert | n.a. | α = 0.95 | 2-weeks r = 0.84
27 | Teacher Computer Attitude Scale | Huang et al. (1995) | 23 | Educators | Teachers | Likert | n.a. | α = 0.73–0.95 | n.a.
28 | ETSU Computer Attitude Scale | Dubois et al. (1995) | 23 | Educators | Student Teachers | Semantic Differential | n.a. | n.a. | n.a.
29 | Attitudes toward Technology | Pelton and Pelton (1997) | 42 | Educators | College Students | Likert | Exploratory (Varimax) | n.a. | n.a.
30 | Computer Attitudes of Non-Computing Academics | Seyal et al. (1999) | 12 | Non-computing Academics | Academics | Likert | Exploratory (Principal) | α = 0.79 | n.a.
31 | Computer Self-Efficacy Scale | Young (2000) | 48 | Students | Middle & High School Students | Likert | Exploratory (Principal) | α = 0.64–0.87 | n.a.

This diverse set of instruments designed to study attitudes towards computers dates back to Lee's (1970) study of social attitudes towards "electronic brain machines" (p. 55). Since Lee's pioneering study, many instruments to measure attitudes towards computers have appeared, particularly after the theoretical link between attitudes and behavior became well accepted (Ajzen & Fishbein, 1977). Rather than discuss these instruments in the chronological fashion of the table, we organize our discussion around several issues that we believe are salient to researchers when selecting an instrument with which to measure attitude towards computers. In particular, we note when instruments are particularly long or complex, were developed for a specialized setting, or lack psychometric justification.

4.1. Instrument length

One issue for researchers to consider when selecting an instrument with which to assess attitudes towards computers is its length. The longer an instrument, the more likely participant fatigue could impact the results (cf. Kerlinger, 1986). Participant fatigue is a threat to internal validity as it may lead to response bias, such as providing the same response to all items (Hinkin, 1995). Further, in survey research the participant fatigue associated with long instruments could lower response rates. Participant fatigue is especially salient when researchers are measuring multiple constructs in a single study (e.g. attitudes towards computers and UIS, as well as a variety of individual and organizational variables), as is frequently necessary in IS research. As such, longer instruments may be poor choices in such settings. While there is no rule regarding when an instrument should be considered "long," we selected 20 items as a cut-off. Four to six items per construct has been recommended as a goal for construct development (Hinkin, 1995), and adding items beyond 19 does little to improve internal consistency (Cortina, 1993). Therefore, at 20 items a scale is more than three times Hinkin's (1995) recommendation and beyond what should be required to reach reasonable levels of internal consistency.

We classified eight instruments as long (entries: 1, 10, 11, 14, 15, 23, 24 and 28). Note that we do not include in this group instruments designed to assess multiple constructs, which are discussed in the following section. While longer instruments tend to possess higher levels of internal consistency (Cortina, 1993), many of the shorter attitude towards computers instruments demonstrate levels of internal consistency similar to those of the longer scales. Therefore, in circumstances where one is concerned with participant fatigue, one of the more parsimonious instruments is a better choice.

4.2. Instrument complexity

While we classified an instrument as "long" if it contained 20 or more items, in some cases longer instruments evolved because researchers were assessing multiple issues, requiring more complex instruments. We identified 16 instruments (over half of those reviewed) that were intentionally developed to assess multiple constructs (entries: 3, 6, 7, 8, 9, 12, 13, 16, 17, 18, 19, 21, 26, 27, 29 and 31). Other instruments that have been found to possess multiple latent constructs (such as through exploratory factor analysis), but were not designed to assess multiple issues distinctly, are not included in this count.

The Attitudes about Computers instrument (3) (Zoltan & Chapanis, 1982) contains 64 items to assess six views of computers (sound working machines, negatively toned, desirable and useful, slave to man, useful and fun and stimulating, and ease of use). The Computer Use Questionnaire (6) (Griswold, 1983) assesses applications of computers and social implications of computers, with 10 items for each construct. The 100-item Cybernetics Attitude Scale (7) (Wagman, 1983) assesses attitudes towards computers with respect to ten sectors of society (society, values, cognition, education, medicine, counseling, mathematics, banking, politics, and criminal justice). The Computer Attitude Scale (8) (Loyd & Gressard, 1984) has three subcomponents (computer liking, computer confidence, and computer anxiety). The three sub-scale structure was confirmed by Bandalos and Benson (1990), but not by Woodrow (1991); hence, it seems important that any future studies utilizing this instrument assess its latent structure. An additional consideration is that this instrument has also been investigated with a fourth component (attitudes towards academic endeavors associated with computer training) (Loyd & Loyd, 1985), and this four factor solution was confirmed by Nash and Moroz (1997).

The Computer Attitude Scale (9) (Collis, 1984) examines computer related attitudes and self-confidence. The Attitudes Towards Computer Assisted Instruction instrument (13) (Allen, 1986) contains components to assess comfort, creativity and function. The Attitudes Towards Computer Usage Scale (16) (Popovich, Hyde, & Zakrajsek, 1987) measures attitudes towards computers as well as attitudes towards other electronic equipment. The Bath County Computer Attitudes Inventory (17) (Bear, Richards, & Lancaster, 1987) was developed to predict attitudes towards computers based on other factors, such as computer usage and experience and educational and career plans; hence, it includes items to assess some exogenous variables in addition to attitudes towards computers. The Attitudes Towards Computers Scale (18) (Rosen, Sears, & Weil, 1987) measures computer attitudes as well as perceptions concerning the impact of computers on future job prospects, job creation and privacy. The Minnesota Computer Literacy and Awareness Assessment Instrument (19) (Swadener & Hannafin, 1987) includes items to assess computer self-confidence, computer utility and sex bias in addition to attitude towards computers. The Teacher Computer Attitude Scale (27) (Huang et al., 1995) used elements of other scales to assess four components: sex difference, comfort, value and liking. The Attitudes Towards Technology instrument (29) (Pelton & Pelton, 1997) was structured to examine attitudes towards technology, perceptions of the importance of certain technologies and confidence in using them. The Computer Self-Efficacy Scale (31) (Young, 2000) was developed to consider four dimensions: confidence in using computers, perception of computers as a male domain, perceived usefulness of computers, and teachers' attitudes.

Additionally, three instruments were designed to assess the cognitive, affective and behavioral components of attitude distinctly: the Cognitive and Affective Computer Attitudes Scale (12) (Bannon, Marshall, & Fluegal, 1985), the Computer Attitude Measure (21) (Kay, 1989), and the Computer Attitude Scale for Secondary Students (CASS) (26) (Jones & Clarke, 1994).

Although it may be more effective to assess constructs separately, the broader orientation of these instruments has benefits in research settings that require the assessment of all or most of the constructs in a particular instrument. If the additional constructs are not relevant to a particular research setting, then a shorter instrument would seem to be a better choice to assess attitudes towards computers.

4.3. Instruments with a specialized focus

Much research on attitudes towards computers has occurred in educational settings. We identify eight instruments that target educators (we do not include re-designs of these instruments in our counts) (entries: 2, 6, 12, 15, 22, 27, 28 and 29). Most of these instruments have had their latent structure and internal consistency assessed. Their focus makes them especially appropriate in the educational settings for which they were developed. Examples of some of the items that focus on educational issues include "Computers can be a useful instructional aid in almost all subject areas" (2) or "Computer will relieve teachers of routine duties" (6).

In addition to the instruments that focus on educators, an additional set focuses on students. In all, 16 instruments, over half of all the instruments we identified, assess students' attitudes towards computers (entries: 4, 5, 8, 9, 10, 11, 12, 13, 17, 19, 24, 25, 26 and 31). Examples of the types of items that focus on students include "Having computers in the classroom would be fun for me" (4), "I would rather have a computer present my instruction than a teacher" (11), and "Other students look to me for help when using the computer" (26).

Some of the instruments are quite focused. For instance, the Computer Attitude Scale (9) (Collis, 1985), the Minnesota Computer Literacy & Awareness Assessment Instrument (19) (Swadener & Hannafin, 1987), and the Computer Attitude Scale for Secondary Students (26) (Jones & Clarke, 1994) focus on gender differences. The Attitude Toward Computer Aided Instruction instrument (13) (Allen, 1986) focuses on professional students. The Computer Attitude Survey (25) (Klein, Knupfer, & Crooks, 1993) focuses on re-entry versus traditional students. All but one of these instruments has had its internal consistency assessed, and the latent structure of many has also been examined. Further, two of these instruments (Beliefs about Computers (5), Ellsworth & Bowman, 1982; Computer Attitude Scale for Secondary Students (26), Jones & Clarke, 1994) have been examined with respect to stability over time. These sets of instruments provide researchers in an educational setting with a variety of choices.

Three additional instruments have a specialized focus: the Attitudes to Computers of Managers in the Hospitality Industry (20) (Gamble, 1988); the Attitudes Towards Computers Questionnaire (23) (Jay & Willis, 1992), which focuses on attitudes towards computers among the elderly; and the Computer Attitudes of Non-Computing Academics (30) (Seyal, Rahim & Rahman, 1999).

A final instrument to consider in this context is the Cybernetics Attitude Scale (7) (Wagman, 1983), which examines attitudes towards computers with respect to ten sectors of society (society, values, cognition, education, medicine, counseling, mathematics, banking, politics, and criminal justice). Hence, it appears that one could use only those items relevant to the segment of society under study. While the focus on a specific population (such as educators, students, hospitality managers, the elderly, or non-computing academics) is a strength in that particular setting, it renders these instruments less appropriate in other circumstances.

4.4. Psychometric issues

When discussing the psychometric properties of these instruments, we focus on three fundamental properties that provide evidence of reliability: latent structure, internal consistency and stability. Assessment of these properties is consistent with Hinkin's (1995) recommendations for scale development. Latent structure analysis assesses how well the items in an instrument relate to the underlying (latent) construct(s), in this case "computer attitude," and is conducted via factor analysis. Confirmatory factor analysis (CFA) is preferred to exploratory factor analysis (EFA) when prior theory or hypotheses are available (Nunnally & Bernstein, 1994). Internal consistency analysis investigates the relationship between the items in an instrument. Internal consistency is usually assessed via Cronbach's alpha (1951), which can be conceptualized as the mean of all split-half reliabilities (Cortina, 1993). It is important to assess latent structure prior to internal consistency because it is possible to identify a single factor even when inter-item correlations are low, or to achieve high inter-item correlations from items related to multiple latent constructs (Cortina, 1993). Finally, we are interested in an instrument's stability over time, i.e. test–retest analysis. Test–retest analysis is conducted by administering an instrument to the same set of subjects at two different time periods and then assessing the relationship between the two administrations, either through a correlation assessment or a t-test. Given that attitude towards computers is conceptualized as a trait rather than a state construct, it is reasonable to desire instruments that are capable of assessing the construct consistently over time. Further, as some researchers would like to alter participants' attitudes towards computers through a manipulation (such as exposure to a computer system), it seems important to ensure that an instrument's assessment of attitude towards computers is stable in the absence of a manipulation.
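To ground the alpha computation just described, here is a minimal sketch in Python (NumPy only), assuming an n-respondents-by-k-items response matrix; the simulated data are purely illustrative and are not from any study discussed in this paper.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale score
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Illustrative data: 176 respondents answering 8 items on a 1-7 scale, all
# driven by a single latent attitude plus noise, so alpha should be healthy.
rng = np.random.default_rng(0)
latent = rng.normal(4.0, 1.0, size=(176, 1))
responses = np.clip(np.round(latent + rng.normal(0.0, 1.0, size=(176, 8))), 1, 7)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

As the text argues, a healthy alpha alone does not establish unidimensionality; the latent structure still needs to be checked first.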

When we examine the psychometric analyses conducted on previous instruments (see Table 1) with respect to latent structure analysis, we find that 65% (20 of 31) of the instruments had their latent structure assessed either in the initial or a follow-up study. However, all but one instrument's latent structure was assessed via exploratory factor analysis (EFA). EFA is appropriate for those circumstances where there is no a priori theory available. In the context of attitude towards computers, a fairly well established construct, CFA seems appropriate. However, the infrequent use of CFA may be due to the fact that the structural modeling tools used to conduct CFA have become widespread only relatively recently. It will be interesting to see if future studies rely more on CFA to examine latent structure.

Assessment of internal consistency has been conducted on 71% (22 of 31) of the instruments. Of these, all but one relied upon alpha (Cronbach, 1951); the remaining instrument reports a split-half reliability (entry: 2). Of those assessed via alpha, only one instrument (entry: 22) does not report an alpha that exceeds the recommended 0.70 threshold (Nunnally & Bernstein, 1994). However, internal consistency alone is not adequate to assess unidimensionality (Cortina, 1993), and in those circumstances where internal consistency has been assessed but not latent structure, it seems important to assess the latent structure of these instruments in future studies.

Finally, we found that the stability over time of only four instruments has been assessed (entries: 5, 14, 16 and 26). We found this surprising. As noted above, because attitude towards computers is a trait variable, it is reasonable that researchers would desire that instruments to assess it be stable over time. More surprising was that we were able to identify four studies (entries: 10, 18, 23 and 28) that used instruments to assess changes in attitude after a manipulation, yet we found no evidence that the instruments had been established to be stable over time. Without first establishing that the instruments were stable over time, it is difficult to conclude that changes in attitude were due to the manipulation rather than random variance.

Each of the above instruments has strengths, and for research in an educational setting there appear to be many instruments with which to examine attitude towards computers. However, for researchers in non-educational settings, the choices are less obvious. We were able to identify only one instrument for a general population for which the latent structure, internal consistency, and stability have all been assessed (entry: 16). However, this instrument contains 20 items. As such, a shorter instrument that addresses the theoretical areas of interest and demonstrates good psychometric properties would be useful for researchers. In the remainder of this paper, we describe the development and reliability assessment of the ATCI, which we believe meets the above criteria.

5. Development of the ATCI

At the time of the ATCI’s initial development, no single instrument had beencommonly accepted for measuring basic attitudes towards computers. The ATCI

was developed consistent with the theoretical argument that attitude is composed of

three elements: affective, behavioral, and cognitive components (Triandis, 1971). In

creating the items, the authors intended to represent the various areas where research

had occurred on the effect of ‘‘affective or evaluative reactions towards using com-

puters’’ (p. 36). Consistent with these criteria, the instrument included cognitive (cf.

Huber, 1973) and affective items (Lucas, 1975). The behavioral items were subdi-

vided into two components, ease of use (Lucas, 1975) and productivity enhancement(Fuerst & Cheney, 1982), consistent with the growing body of literature at the time

that suggested that these two categories were equally important. Fig. 2 shows the

Page 14: Reliability assessment of the attitude towards computers instrument (ATCI)

Fig. 2. Analysis of the attitude towards computer instrument’s theoretical components showing corre-

sponding items.

674 T.M. Shaft et al. / Computers in Human Behavior 20 (2004) 661–689

relationship between each area and the items that comprise the instrument (see the

Appendix).

The instrument was developed using the semantic differential format for several measurement-related reasons. First, the semantic differential form "may be adapted through choice of concepts and scale to study numerous phenomena" (Miller, 1964, p. 269). Further, research on this format indicates that different types of subjects use these scales in essentially similar ways (Osgood, Suci, & Tannenbaum, 1957). The semantic differential format is also especially suited to assessing basic attitudes (Mehling, 1959) and has yielded consistently high reliability and validity scores across applications (Miller, 1964).

To limit response bias, the items were randomly distributed throughout the ATCI, and four items were reverse scaled (noted in the Appendix). Reverse scaling was accomplished by switching the anchors within an item so that a positive response became a low score (rather than a high score). Reverse scaling decreases the likelihood that a participant will select a response (usually high or low) and give that answer for every item. When using the instrument, the researcher recodes the score associated with reverse scaled items such that all positive responses are counted as a high score. While we designed the instrument to include reverse scaling, the use of reverse scaling has been questioned (Hinkin, 1995). The semantic differential format allows the reverse scaling to be removed by switching the anchors on those items per the preferences of the researcher.
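As an illustration of the recoding step just described, the sketch below flips reverse-scaled items on a 1–7 semantic differential so that a positive response is always a high score; the item indices here are placeholders, not the ATCI's actual reverse-scaled items (those are noted in the Appendix).

```python
import numpy as np

SCALE_MIN, SCALE_MAX = 1, 7
REVERSED_ITEMS = [1, 4, 6, 7]  # hypothetical 0-based positions of reverse-scaled items

def recode(responses: np.ndarray) -> np.ndarray:
    """Flip reverse-scaled items: a raw 1 becomes 7, a raw 2 becomes 6, and so on."""
    out = responses.astype(float).copy()
    out[:, REVERSED_ITEMS] = (SCALE_MIN + SCALE_MAX) - out[:, REVERSED_ITEMS]
    return out

raw = np.array([[7, 1, 6, 5, 2, 6, 2, 1]])  # one respondent, 8 items
print(recode(raw))                          # -> [[7. 7. 6. 5. 6. 6. 6. 7.]]
```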

Subsequent to its creation, four experienced researchers examined the instrument; none recommended changing the items. The instrument was then piloted with student subjects. The ATCI's first use was in research concerning decision making within a manufacturing information system (Sharfman & Gleeson, 1984). In this study the instrument demonstrated good internal consistency (Cronbach's α = 0.76) and some preliminary ability to predict choices in the context of a manufacturing information system. Since the data reflected good internal consistency and some predictive ability, there was no justification to alter items. Subsequently, other researchers used the instrument (Burkhardt, 1994; Burkhardt & Brass, 1990; Sharfman & Gleeson, 1989; Webster, Heian, & Michaelman, 1990; Webster & Martocchio, 1990), thereby providing data and motivation for a more comprehensive examination of the instrument's psychometric properties.

6. Reliability analysis

Consistent with our arguments above, we are interested in assessing the latent structure, internal consistency and stability of the instrument. The combination of latent structure and internal consistency analysis provides evidence that the instrument is assessing a single construct (attitude towards computers) (Cortina, 1993). Stability analysis via test–retest analysis provides evidence that the instrument is assessing a stable trait, rather than a temporary state.

6.1. Latent structure analysis

6.1.1. Purpose of the analysis

A first step in assessing the reliability of an instrument is to establish that all items in the instrument are measuring the same latent construct. We address this question with respect to the Attitudes Towards Computers Instrument (ATCI) by conducting a confirmatory factor analysis to test the hypothesis of unidimensionality. The more that the items of a scale converge into a single factor, the more confident one can be that they measure the same construct. Fig. 3 presents the univariate model for the instrument, which reflects the following specific hypothesis:

H1. A one factor solution fits the items of the ATCI.

6.1.2. Method and analysis

We conducted a CFA using Analysis of Moment Structures (AMOS) software (Arbuckle, 1997), a structural equation modeling tool. AMOS is in the same class of analytic tools as LISREL and EQS but provides easier use and more extensive examination of the fit between models and data. CFA "involves the specification of one or more putative models of factor structure, each of which proposes a set of latent variables (factors) to account for covariance among a set of observed variables" (Doll, Xia, & Torkzadeh, 1994, p. 453). A sample of 176 juniors and seniors from a large southwestern university completed the ATCI. The participants were volunteers in a larger study of computer-based and inventory-management choices (Sharfman & Gleeson, 1984) and were given class credit for participation.
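The original analysis was conducted in AMOS. For readers without AMOS, a rough equivalent of this one-factor CFA can be sketched with the Python package semopy, which accepts lavaan-style model descriptions; the item names ATT1–ATT8 follow Table 2 below, while the CSV file name is hypothetical. This is an illustration under those assumptions, not the authors' procedure.

```python
import pandas as pd
import semopy  # pip install semopy

# H1: a single latent factor (attitude) accounts for all eight observed items.
MODEL_DESC = """
attitude =~ ATT1 + ATT2 + ATT3 + ATT4 + ATT5 + ATT6 + ATT7 + ATT8
"""

# Hypothetical file with one row per respondent and one column per item.
data = pd.read_csv("atci_responses.csv")

model = semopy.Model(MODEL_DESC)  # semopy fixes one loading to 1 for identification,
model.fit(data)                   # much as the paper fixed ATT8's coefficient

print(model.inspect())            # loadings, standard errors, z-values (cf. the CRs in Table 2)
print(semopy.calc_stats(model))   # chi-square, df, GFI, AGFI, CFI and related fit indices
```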

Fig. 3. A univariate model of attitude towards computers.

6.2. Results

Table 2 presents the standardized regression weights of the observed items on the computer attitude construct (analogous to the factor loadings in an exploratory factor analysis), the estimated standard error of the estimate, and the Critical Ratios (CR). The CR statistic, distributed as t, tests the hypothesis that the regression weight is different from zero; Critical Ratios greater than 1.96 demonstrate significance at P < 0.05. The coefficient of ATT8 was set at 1.00 in unstandardized form as the minimum identification requirement for the model (Arbuckle, 1997); as such, no CR or standard error is calculated for this variable. Note that all CRs exceed 1.96, and hence are significant at P < 0.05.

Table 2
Standardized regression weights, standard errors and critical ratios of the observed items

Item | Regression weight | Standard error | Critical ratio
ATT1 | 0.56 | 0.35 | 4.971
ATT2 | 0.61 | 0.26 | 5.170
ATT3 | 0.73 | 0.44 | 5.574
ATT4 | 0.72 | 0.37 | 5.567
ATT5 | 0.44 | 0.24 | 4.319
ATT6 | 0.70 | 0.39 | 5.494
ATT7 | 0.57 | 0.42 | 4.995
ATT8 | 0.46 | NA | NA

We calculated several indices to determine the degree to which the hypothesized model fits the data. The most commonly used index of fit is the χ² statistic; for this analysis the χ² is 59.94 with df = 20 (P = 0.0000). Because the χ² test alone is sensitive to sample size, we also calculated several additional fit indices: Goodness of Fit Index = 0.92, Adjusted Goodness of Fit Index = 0.86, Incremental Fit Index = 0.90 (Bollen, 1989), Comparative Fit Index = 0.90 (Bentler, 1990) and χ²/df ratio = 2.997. All of these statistics indicate good to substantial fit for the single factor model (Doll et al., 1994), confirming the hypothesis that the univariate model fits the items of the ATCI and thereby supporting the premise that the ATCI is measuring the single construct of attitude towards computers.

6.3. Internal consistency analysis

6.3.1. Purpose of the analysis

In addition to latent structure analysis, internal consistency analysis is needed to provide evidence of unidimensionality (Cortina, 1993). Therefore, in this section we present data relevant to the internal consistency of the instrument to determine whether participants respond consistently across items in the scale. The most commonly accepted method of assessing internal consistency is Cronbach's (1951) α (alpha) statistic, with 0.70 considered sufficient to demonstrate reliability (Nunnally & Bernstein, 1994).

6.4. Method and analysis

We present internal consistency statistics from studies conducted by various researchers that have used the ATCI. In total, these studies provide internal consistency data from 349 subjects in three distinct settings (please see the individual papers for the specifics of each study).

6.5. Results

As one can see from Table 3, the alpha statistics for the ATCI have consistently exceeded the 0.70 threshold. In fact, over the three samples presented in Table 3, the average alpha for the instrument was 0.80. These data provide evidence that the ATCI demonstrates a high degree of internal consistency. Note that in the past, longer instruments have been considered superior to short instruments for achieving high levels of internal consistency. However, recent work in measurement indicates that shorter instruments can be more reliable than longer instruments, as it can be difficult to develop a large number of items that address the construct space (Embretson & Hershberger, 1999). Hence, although the ATCI is relatively short, the instrument achieves good levels of internal consistency because of the items' tight focus on the latent construct. These levels of internal consistency, combined with the CFA presented earlier, provide evidence that the ATCI is unidimensional.

Table 3
Summary of internal consistency analyses

Reported study | Population | Sample size | Alpha
Sharfman and Gleeson (1989) | Business students and professional production control personnel | 128 | 0.76
Burkhardt and Brass (1990) | Federal employees | 75 | 0.84
Webster et al. (1990) | Business students | 146 | 0.82

7. Stability analysis

7.1. Purpose of the analysis

As the ATCI appears to be unidimensional, the next step in reliability analysis is to assess whether responses to the instrument are stable over time (Kerlinger, 1986). Only after an instrument has been demonstrated to be stable over time can researchers conclude that changes detected by the instrument are due to actual changes in attitude (as influenced by manipulations) rather than measurement error. The length of the retest period is a function of what one is trying to measure. It is critical to determine the nature of the construct underlying the instrument so that one can select the proper interval between test–retest administrations. If the interval is too short, test–retest "reliability exhibits carryover effects... when subjects remember their responses or become uncooperative. When the interval is too long, the actual attitudes or situations may change and the responses are no longer comparable" (Galletta & Lederer, 1989, p. 424).

Following the procedures of previous studies of test–retest reliability in the IS literature, we assessed test–retest reliability over both a short and a long interval (cf. Torkzadeh & Doll, 1991). The short interval (2.5 h) allows us to "detect effects of such temporary conditions as fluctuations in attention or a 'momentary set' (i.e. transient bias) for particular items" (Galletta & Lederer, 1989, p. 424). For the long interval we selected a 4-week time frame. A 2-week interval is considered necessary to assess long range stability (Nunnally & Bernstein, 1994) and has been used in previous IS research to examine user information satisfaction (Torkzadeh & Doll, 1991). However, user satisfaction is considered a state rather than a trait variable (Galletta & Lederer, 1989). Therefore, as attitude towards computers is conceptualized as a trait variable, we believed that a longer interval was appropriate for the ATCI and selected 4 weeks. This interval is long enough to assure that participants would not remember their responses to the first administration of the instrument, and any instability in attitude would likely become apparent.

Two separate studies were performed so that the results from the short interval would not contaminate those from the long interval. For both intervals, the key to a test–retest analysis is finding no differences based on when the instrument is administered. Therefore, for both intervals, the analysis tests the following hypothesis:

H2. There is no difference in responses to the ATCI as a function of when the instrument is administered.

7.2. Short interval test–retest: methods and analysis

The subjects for this test–retest were freshmen and sophomores in a quantitative methods course at a Southwestern university. On the administration day, 25 students were present. At the beginning of the class, the authors administered the instrument to all students. Participation was voluntary and no class credit was given, but students used class time to complete the instrument. Participants were told that the data were being collected as part of an overall study about computer use. The second administration was conducted approximately 2.5 h later. Only 19 students also participated in the second administration; six students left class prior to it. The use of student participants in this type of research is appropriate because only the psychometric properties of the instrument are of interest, not the effect of context or experience on behavior (cf. Gordon, Slade, & Schmitt, 1986).

A score for each participant was computed by first reverse scaling the indicated items (see the Appendix), then summing responses and dividing by the number of items to place scores on the same metric as the original items. The Cronbach alpha for this sample was α = 0.91 (n = 25) for the first administration and α = 0.85 (n = 19) for the second administration.

We examine test–retest reliability through both a t-test for differences between the administrations and the correlation between the administrations. In the case of the t-test, a stable instrument will demonstrate no differences between the two sets of results; hence, one wishes to retain the null hypothesis, a weak hypothesis test. Therefore, to be conservative, we set α at 0.1. For the correlation analysis, a stable instrument will demonstrate a high level of correlation between the two administrations; hence, one wishes to reject the null hypothesis of no association, a strong hypothesis test. Therefore, to be conservative, we set α at 0.05. In the case of the t-test (the weak hypothesis test) we also present a power analysis for both levels of α.
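To make the two-part test concrete, here is a minimal sketch in Python using SciPy; the two vectors stand in for the 19 matched participants' mean scores and are invented for illustration, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical mean ATCI scores for the same 19 participants at each administration.
first = np.array([5.1, 6.3, 4.8, 5.9, 6.7, 5.2, 4.4, 6.0, 5.5, 6.1,
                  4.9, 5.7, 6.4, 5.0, 5.8, 6.2, 4.6, 5.3, 6.5])
second = np.array([5.4, 6.2, 5.0, 6.1, 6.6, 5.5, 4.7, 5.9, 5.6, 6.3,
                   5.1, 5.6, 6.5, 5.2, 5.9, 6.1, 4.9, 5.5, 6.4])

# Weak test: a stable instrument should show no mean difference (alpha = 0.10).
t_stat, p_t = stats.ttest_rel(first, second)
# Strong test: a stable instrument should show a high correlation (alpha = 0.05).
r, p_r = stats.pearsonr(first, second)

print(f"paired t = {t_stat:.2f} (P = {p_t:.3f}); r = {r:.2f} (P = {p_r:.4f})")
```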

7.3. Short interval test–retest: results

Table 4 presents the results of the t-test used to examine Hypothesis 2 in the short interval test–retest analysis. The mean difference between the administrations was 0.20, and the t-test on the difference was not significant (t = −1.17, P = 0.26, df = 18). These results retain the hypothesis that there is no difference in scores on the ATCI as a function of when it was administered. The correlation between the administrations of the instrument was significant (r = 0.81, P < 0.005), indicating that we may reject the null hypothesis of no relationship between the responses provided in the two administrations.

Table 4
Two and one half-hour test–retest interval statistics

Means and standard deviations for the two administrations of the ATCI
Administration | Number of cases | Mean | Standard deviation | Standard error
First | 19 | 5.49 | 1.27 | 0.29
Second | 19 | 5.69 | 0.89 | 0.20

Results of analysis comparing the two administrations of the ATCI
Mean difference | Standard deviation | Degrees of freedom | t-value | Significance of t-value
0.20 | 0.76 | 18 | −1.17 | 0.26

As the t-test analysis results in a weak hypothesis test (retaining a null hypothesis), we conducted a power analysis to establish whether the statistical test possessed adequate power to detect any differences in attitude. As recommended by Cohen (1988), effect sizes were set as small (0.2 of a standard deviation difference between population means), medium (0.5 of a standard deviation difference), and large (0.8 of a standard deviation difference), and two levels of Type I error were examined (α = 0.05 and α = 0.10). As we present in Table 5, given a small effect size, the power of the t-test was 0.28 at α = 0.05 and 0.40 at α = 0.10. The power for a medium effect size at α = 0.05 was 0.78, just slightly below the 0.8 recommended by Cohen (1988). All other power levels were greater than 0.995. These power levels are comparable to those reported by previous IS researchers (Baroudi & Orlikowski, 1989; Torkzadeh & Doll, 1991). Further, in the context of this study, a small effect size is the difference attributable to one person in six changing his or her response by a value of one between the two administrations of the instrument (based on the standard deviation observed in the test–retest study). Therefore, the amount of change attributed to a small effect size is negligible and may not be of interest to many researchers. As such, it is likely that the somewhat lower power associated with a small effect size will not concern most researchers. The hypothesis test and follow-up power analysis provide evidence that subjects' responses to the ATCI were stable over the short time interval.

Table 5
Two and one half-hour test–retest interval: power associated with three effect sizes and two levels of Type I error

Effect size | Power at α = 0.05 | Power at α = 0.10
Small (0.2) | 0.28 | 0.40
Medium (0.5) | 0.78 | 0.86
Large (0.8) | >0.99 | >0.99


Table 6
Four-week test–retest interval statistics

Means and standard deviations for the two administrations of the ATCI
Administration   Number of cases   Mean   Standard deviation   Standard error
First            70                5.57   0.798                0.095
Second           70                5.46   0.782                0.093

Results of analysis comparing the two administrations of the ATCI
Mean difference   Standard deviation   Degrees of freedom   t-value   Significance of t-value
0.1071            0.621                69                   1.44      0.153


7.4. Long interval test–retest: method and analysis

In this study, participants completed the instrument on two occasions approxi-

mately 4 weeks apart. This time interval should eliminate potential ‘‘carryover ef-

fects’’ associated with short intervals (i.e. when subjects might remember their

responses) and also situations associated with overly long intervals that may render

the responses no longer comparable (i.e. when the attitudes or situation may change)

(Galetta & Lederer, 1989). For the long interval test–retest study, 112 junior andsenior business students from two Southwestern universities participated. Partici-

pation was voluntary, no class credit was provided. The first administration of the

instrument was during the first week of class and the second occurred approximately

4 weeks later. Participants were told that the data were being collected as part of an

overall study about computer use. During this study, we tested the same hypothesis

and created subjects’ scores in the manner used in the short interval test–retest.

Cronbach’s alpha for the first administration in the 4-week test–retest was a¼ 0.82,

n¼ 106, and for the second administration it was a¼ 0.80, n¼ 76. Consistent withour approach for the short interval test–retest analysis, we set a at 0.1 for the t-test ofdifferences between the two administrations of the instrument and a¼ 0.05 for the

correlation analysis.
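For reference, Cronbach's (1951) alpha is straightforward to compute directly from an item-response matrix. The sketch below uses simulated responses; the eight-item, 106-respondent shape mirrors this administration, but the numbers themselves are placeholders, and reverse-keyed items are assumed to have been recoded already.

    # Minimal Cronbach's (1951) alpha from an item-response matrix
    # (rows = respondents, columns = items). Data are simulated placeholders.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

    rng = np.random.default_rng(1)
    trait = rng.normal(5.5, 1.0, size=(106, 1))                    # shared attitude
    responses = np.clip(trait + rng.normal(0, 0.8, (106, 8)), 1, 7)
    print(f"alpha = {cronbach_alpha(responses):.2f}")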

7.5. Long interval test–retest: results

Seventy of the 112 subjects completed the instrument on both occasions. Table 6 presents the results of these analyses. The t-test comparing each individual's responses at the two points in time was not significant (t = 1.44, P = 0.153, df = 69); therefore, we retain the null hypothesis of no difference between the two administrations of the ATCI.1 Further, the correlation between participants' scores on the two administrations was significant (r = 0.69, P = 0.001); therefore, we reject the null hypothesis of no relationship between the responses to the two administrations of the ATCI.

1 A between-groups t-test did not detect a significant difference between the two groups of students (P = 0.893 comparing the two groups on the first administration, P = 0.678 on the second administration). Because a difference was not detected, the responses were combined to provide a single set of data.


Table 7
Four-week test–retest interval: power associated with three effect sizes and two levels of Type I error

Effect size      α = 0.05   α = 0.10
Small (0.2)      0.56       0.68
Medium (0.5)     >0.99      >0.99
Large (0.8)      >0.99      >0.99


The test–retest results presented in Table 6 indicate that respondents' attitudes towards computers were consistent over time. Again, the issue of power arises. Therefore, we conducted a power analysis using the same effect sizes and levels of Type I error as for the short test–retest interval. Table 7 presents the results of the power analysis for the long test–retest interval. Given a small effect size, the power of the t-test was 0.56 at α = 0.05 and 0.68 at α = 0.10. The power levels for detecting the other effect sizes were greater than 0.99, which exceeded the 0.8 recommended by Cohen (1988) and compare favorably with those reported by previous IS researchers (Baroudi & Orlikowski, 1989; Torkzadeh & Doll, 1991).

The test–retest results demonstrate the instrument's ability to measure attitudes towards computers reliably over both a short and a relatively long time interval. The respondents did not show a significant shift in attitudes towards computers over either time period. Further, adequate statistical power existed to detect all but trivial changes in scores.

8. Summary of reliability findings

The conclusion from the latent structure, internal consistency and stability analyses is that the ATCI can be considered highly reliable. We summarize the results of these analyses in Table 8. From the latent structure analysis conducted via CFA, it appears that the instrument assesses a single latent construct. Further, based on the internal consistency analysis assessed via Cronbach's (1951) alpha, participants appear to respond to the instrument in a consistent fashion. Finally, participants' responses to the instrument appear to be stable over both short (2.5-h) and long (4-week) time intervals. In addition to the evidence described above, Burkhardt's (1994) data demonstrate that scores on the ATCI remained stable for 12 months.

Table 8
Summary of reliability assessment of the attitude towards computer instrument (ATCI)

# of items: 8
Target population: General population
Sample for reliability analysis: Students
Item format: Semantic differential
Latent structure analysis: Confirmatory factor; χ² = 59.94, df = 20, P = 0.0000
Internal consistency: α = 0.76–0.91
Test–retest, 2.5-h interval: t = -1.17, P = 0.26; r = 0.81, P < 0.005
Test–retest, 4-week interval: t = 1.44, P = 0.15; r = 0.69, P = 0.001


9. Discussion

When we contrast the ATCI with previous attitudes towards computers instruments, we find that the ATCI possesses qualities that may make it appealing to other researchers. The ATCI is relatively short (eight items), which limits problems associated with participant fatigue. Further, the ATCI was designed as a general assessment of attitude towards computers, rather than for a specific setting as with over three-fourths (25 of 31) of the other instruments. Whether a specific focus is a strength or a weakness depends upon the research setting. The use of the semantic differential provides the ability to include reverse scaling to limit response bias, yet allows other researchers to switch anchors if consistent scaling is preferred.
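As a concrete illustration of reverse scaling, a hypothetical reverse-anchored item on a seven-point semantic differential (the seven-point range and the column names are our assumptions) would be recoded before the item scores are combined:

    # Hypothetical reverse-keying example: on a 1-7 semantic differential,
    # a reverse-anchored item is recoded as 8 - response so that all items
    # point in the same direction before scoring.
    import pandas as pd

    df = pd.DataFrame({"item1": [6, 7, 5], "item2_reversed": [2, 1, 3]})
    df["item2"] = 8 - df["item2_reversed"]     # 1 <-> 7, 2 <-> 6, 3 <-> 5, ...
    df["attitude"] = df[["item1", "item2"]].mean(axis=1)
    print(df)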

When we consider the results of the reliability analysis, the ATCI compares favorably with other instruments. Although latent construct analysis has been conducted on 65% of the instruments, only two (besides the ATCI) relied on CFA. The reliance on EFA to assess the structure of earlier instruments may be attributed to the fact that CFA has become easily accessible only since the relatively recent advent of structural equation modeling tools such as LISREL and AMOS. However, it would be beneficial to assess the latent structure of these instruments in future studies that rely on them.
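For researchers without access to LISREL or AMOS, open-source tools can now fit the same kind of single-factor CFA. The sketch below uses the semopy package with hypothetical item names (q1–q8) and a hypothetical data file; it is a sketch of the approach, not the authors' original analysis.

    # One possible open-source route to a single-factor CFA, using semopy.
    # The item names and CSV file are hypothetical stand-ins for real data.
    import pandas as pd
    import semopy

    spec = "attitude =~ q1 + q2 + q3 + q4 + q5 + q6 + q7 + q8"
    data = pd.read_csv("atci_responses.csv")   # respondent-by-item scores

    model = semopy.Model(spec)
    model.fit(data)
    print(semopy.calc_stats(model).T)          # chi-square, df, CFI, RMSEA, etc.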

In contrast to the relatively infrequent use of CFA, assessment of internal consistency via Cronbach's (1951) alpha is quite widespread. The alphas reported for the ATCI, like those of most other attitude towards computers instruments, exceed the recommended 0.70 threshold (Nunnally & Bernstein, 1994).

We also provided evidence of stability over time through short and long interval test–retest analysis. The ATCI is one of only five attitude towards computers instruments for which stability has been assessed. Of the four other instruments, three are specific to an educational setting. Given the long history and relatively widespread use of this construct, we were surprised that the stability of so few instruments had been assessed. The ATCI's evidence of stability provides future researchers with confidence that changes detected in a study are due to actual changes in individuals' attitudes rather than random variation or measurement error. It also provides further evidence that attitude towards computers is a relatively stable trait rather than a state variable.

10. Future research

Having demonstrated the psychometric properties of the ATCI, it is reasonable to discuss its possible role in future IS research. As argued earlier, based on the theory of reasoned action (Fishbein & Ajzen, 1975), a better understanding of computer-related attitudes will increase our understanding of computer-related behaviors. Several researchers have demonstrated that the attitude towards computers construct affects users' decisions about system acceptance and their overall satisfaction with computer systems (e.g., Burkhardt, 1994; Sharfman & Gleeson, 1989; Swanson, 1988). In the future, as information technology becomes more widely available to users, the ability to measure attitudes towards computers easily with the ATCI will allow researchers and practitioners to understand why some users embrace information systems while other users remain passive or even resist using new systems.

For instance, researchers investigating the implementation of new systems may wish to make use of the ATCI. Because attitude predicts behavior under many circumstances (cf. Fishbein & Ajzen, 1975), the ATCI may help researchers develop better theories about who will choose to be information systems adopters as well as who will choose to be non-adopters (e.g., Burkhardt & Brass, 1990). Further, the ATCI may even help us understand which individuals within an "adopter" group are likely to make greater or lesser use of a system.

Practitioners may find it helpful to administer the ATCI prior to implementing a

new system to identify users who might benefit from extra training. If, prior to

implementation, key users were found to have very negative attitudes towards

computers, the implementation leaders could provide extra attention that might

make the difference between a successful and an unsuccessful implementation.

11. Conclusion

We examined existing instruments that assess attitude towards computers and found that the majority (over three-fourths) were developed for specific settings, typically educational ones. Further, although most have had some psychometric characteristics assessed, we were surprised that only four earlier instruments have had their stability over time examined. Of these four, only one was applicable to a general setting. As such, it appears that a parsimonious, reliable instrument applicable to a general setting would be useful in many research settings. Therefore, we described the development of, and assessed the reliability of, the ATCI. The unidimensionality of the ATCI was assessed via latent structure (CFA) and internal consistency (Cronbach's alpha) analysis. Instrument stability was assessed via two test–retest studies (short and long intervals). Results indicated that participants' responses were consistent and reliable over time. These results provide researchers with an instrument for assessing attitudes towards computers that is grounded in theory and justified psychometrically.2

When we compare the ATCI to other instruments that measure attitude towards computers, we find that it compares favorably in many respects. It is one of only two instruments designed for a general setting for which latent structure, internal consistency and stability have been assessed. Further, the ATCI is considerably shorter than the other instruments (8 vs. 20 items), which should limit participant fatigue and response bias. Developing and validating measures such as the ATCI moves the information systems field one step closer to the goal of a common set of measures that "provide a common frame of reference within which to integrate various research streams" (Davis et al., 1989, p. 983).

2 We assessed concurrent and predictive validity in separate studies whose results are available in the form of a working paper that may be obtained from the first author upon request.

Appendix. Attitudes towards computers instrument

This questionnaire contains eight pairs of adjectives that are used to describe

computers. Please circle the number that best reflects your opinion. Think of com-

puters in general terms and do not dwell on each specific answer.


References

Abdel-Gaid, S., Trueblood, C. R., & Shrigley, R. L. (1986). A systematic procedure for constructing a valid microcomputer attitude scale. Journal of Research in Science Teaching, 23, 823–839.

Ajzen, I., & Fishbein, M. (1977). Attitude–behavior relations: a theoretical analysis and review of empirical research. Psychological Bulletin, 84(5), 888–918.

Allen, L. R. (1986). Measuring attitude toward computer assisted instruction: the development of a semantic differential tool. Computers in Nursing, 4(4), 144–151.

Arbuckle, J. L. (1997). AMOS users' guide version 3.6. Chicago, IL: SmallWaters Corporation.

Bailey, J. E., & Pearson, S. W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29(5), 530–545.

Bandalos, D., & Benson, J. (1990). Testing the factor structure invariance of a computer attitude scale over two grouping conditions. Educational and Psychological Measurement, 49–60.

Bannon, S. H., Marshall, J. C., & Fluegal, S. (1985). Cognitive and affective computer attitude scales: a validity study. Educational and Psychological Measurement, 45, 679–681.

Baroudi, J. J., & Orlikowski, W. J. (1989). The problem of statistical power in MIS research. MIS Quarterly, 13(1), 87–106.

Bear, G. G., Richards, H. C., & Lancaster, P. (1987). Attitudes towards computers: validation of a computer attitude scale. Journal of Educational Computing Research, 3, 207–218.

Belleau, B. D., & Summers, T. A. (1993). Comparison of selected computer attitude scales. Journal of Consumer Studies and Home Economics, 17, 275–282.

Bentler, P. (1990). Comparative fit indexes in structural models. Psychological Bulletin, 107, 238–246.

Bollen, K. A. (1989). A new incremental fit index for general structural equation models. Sociological Methods and Research, 17, 303–316.

Burkhardt, M. E. (1994). Social interaction effects following a technological change: a longitudinal investigation. Academy of Management Journal, 37(4), 869–898.

Burkhardt, M. E., & Brass, D. J. (1990). Changing patterns or patterns of change: the effects of a change in technology on social network structure and power. Administrative Science Quarterly, 35, 104–127.

Coffin, R., & MacIntyre, P. D. (1999). Motivational influences on computer-related affective states. Computers in Human Behavior, 15, 549–569.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (Rev. ed.). New York: Academic Press.

Collis, B. A. (1985). Psychosocial implications of sex differences in attitudes towards computers: results of a survey. International Journal of Women's Studies, 8, 207–213.

Comber, C., Colley, A., Hargreaves, D. J., & Dorn, L. (1997). The effects of age, gender, and computer experience upon computer attitudes. Educational Research, 39(2), 123–133.

Cortina, J. M. (1993). What is coefficient alpha? An examination of theory and applications. Journal of Applied Psychology, 78(1), 98–104.

Cronbach, L. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16, 297–334.

Dambrot, F. H., Watkins-Malek, M. A., Silling, S. M., Marshall, R. S., & Garver, J. A. (1985). Correlates of sex differences in attitudes towards and involvement with computers. Journal of Vocational Behavior, 27, 71–86.

Davis, F. D. (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340.

Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: a comparison of two theoretical models. Management Science, 35(8), 982–1003.

DeSanctis, G. (1983). Expectancy theory as an explanation of voluntary use of a decision support system. Psychological Reports, 52, 247–260.

Doll, W. J., Xia, W., & Torkzadeh, G. (1994). A confirmatory factor analysis of the end-user computing satisfaction instrument. MIS Quarterly, 18(4), 453–461.

Dubois, D., Linek, W., Gentsch, K., & McEneaney, J. (1995). Field-based student attitudes and the integration of technology. Electronic publication.

Ellsworth, R., & Bowman, B. E. (1982). A "beliefs about computers" scale based on Ahl's questionnaire items. The Computing Teacher, 10, 32–34.

Embretson, S. E., & Hershberger, S. L. (1999). The new rules of measurement: What every psychologist and educator should know. Mahwah, NJ: Lawrence Erlbaum.

Fishbein, M., & Ajzen, I. (1975). Belief, attitude, intention, and behavior. Reading, MA: Addison-Wesley.

Francis, L. J. (1993). Measuring attitude toward computers among undergraduate college students: the affective domain. Computers & Education, 20(3), 251–255.

Fuerst, W. L., & Cheney, P. H. (1982). Factors affecting the perceived utilization of computer based decision support systems. Decision Sciences, 13, 554–569.

Galletta, D. F., & Lederer, A. L. (1989). Some cautions on the measurement of user information satisfaction. Decision Sciences, 20(3), 419–438.

Gamble, P. R. (1988). Attitudes to computers of managers in the hospital industry. Behavior and Information Technology, 7(3), 305–321.

Gattiker, U. E., & Hlavka, A. (1992). Computer attitudes and learning performance: issues for management education and training. Journal of Organizational Behavior, 13, 89–101.

Gordon, M. E., Slade, A. L., & Schmitt, N. (1986). The "science of the sophomore" revisited: from conjecture to empiricism. Academy of Management Review, 11(1), 191–207.

Gressard, C. P., & Loyd, B. H. (1986). Validation studies of a new computer attitude scale. Association for Educational Data Systems Journal, 18(4), 295–301.

Griswold, P. A. (1983). Some determinants of computer awareness among education majors. AEDS Journal, 93–103.

Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21(5), 967–988.

Huang, S. L., Waxman, H. C., & Padron, Y. N. (1995). Teacher education students' attitudes toward educational computing. Electronic publication.

Huber, G. P. (1982). Cognitive style as a basis for MIS and DSS design: much ado about nothing? Management Science, 29, 567–582.

Igbaria, M., & Parasuraman, S. (1989). A path analytic study of individual characteristics, computer anxiety and attitudes toward microcomputers. Journal of Management, 15(3), 373–388.

Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785–793.

Jay, G. M., & Willis, S. L. (1992). Influence of direct computer experience on older adults' attitudes toward computers. Journal of Gerontology, 47(4), 250–257.

Jones, T., & Clarke, V. A. (1994). A computer attitude scale for secondary students. Computers & Education, 22(4), 315–318.

Kay, R. H. (1989). A practical and theoretical approach to assessing computer attitudes: the computer attitude measure. Journal of Research on Computing in Education, 456–463.

Kerlinger, F. N. (1986). Foundations of behavioral research. New York: Holt, Rinehart & Winston.

Kjerulff, K. H., & Counte, M. A. (1984). Measuring attitudes towards computers: two approaches. In SCAMC 8. Silver Spring, MD: IEEE Computer Society Press.

Klein, J. D., Knupfer, N. N., & Crooks, S. M. (1993). Differences in computer attitudes and performance among re-entry and traditional college students. Journal of Research on Computing in Education, 25(4), 498–505.

Lee, R. S. (1970). Social attitudes and the computer revolution. Public Opinion Quarterly, 34(1), 53–59.

Loyd, B. H., & Gressard, C. (1984). The effects of sex, age, and computer experience on computer attitudes. AEDS Journal, 67–77.

Loyd, B. H., & Loyd, D. E. (1985). The reliability and validity of an instrument for the assessment of computer attitudes. Educational and Psychological Measurement, 45(4), 903–908.

Lucas, H. C., Jr. (1985). Why information systems fail. New York: Columbia University Press.

Mehling, R. (1959). A simple test for measuring the intensity of attitudes. Public Opinion Quarterly, 576–578.

Miller, D. G. (1964). Handbook of research design and social measurement. New York: McKay Press.

Nash, J. B., & Moroz, P. A. (1997). An examination of the factor structures of the computer attitude scale. Journal of Educational Computing Research, 17(4), 341–356.

Nickell, G. S., & Pinto, J. N. (1986). The computer attitude scale. Computers in Human Behavior, 2, 301–306.

Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory. New York: McGraw-Hill.

Osgood, C. E., Suci, G. J., & Tannenbaum, P. H. (1957). The measurement of meaning. Urbana, IL: University of Illinois Press.

Pelton, L. F., & Pelton, T. W. (1997). Building attitudes: how a technology course affects preservice teachers' attitudes about technology. Electronic publication: Attitude Site.

Popovich, P. M., Hyde, K. R., & Zakrajsek, T. (1987). The development of the attitudes towards computer usage scale. Educational and Psychological Measurement, 47, 261–269.

Reece, M. J., & Gable, R. K. (1982). The development and validation of a measure of general attitudes towards computers. Educational and Psychological Measurement, 42, 913–916.

Rivard, S., & Huff, S. L. (1988). Factors of success for end-user computing. Communications of the ACM, 31(5), 552–570.

Rosen, L., Sears, D. C., & Weil, M. M. (1987). Computerphobia. Behavior Research Methods, Instruments, & Computers, 19(2), 167–179.

Seyal, A. H., Rahim, M. M., & Rahman, M. N. A. (2000). Computer attitudes of non-computing academics: a study of technical colleges in Brunei Darussalam. Information & Management, 37, 169–180.

Sharfman, M. P., & Gleeson, W. J. (1984). MRP systems: a behavioral model for more effective implementation. Proceedings of the Annual Meeting of the American Institute for Decision Sciences, Toronto.

Sharfman, M. P., & Gleeson, W. J. (1989). MRP systems and the struggle for control. Proceedings of the 20th Annual Meeting of the Decision Sciences Institute, New Orleans.

Stevens, D. J. (1980). How educators perceive computers in the classroom. AEDS Journal, 221–232.

Swadner, M., & Hannafin, M. (1987). Gender similarities and differences in sixth graders' attitudes toward computers: an exploratory study. Educational Technology, 37–42.

Swanson, E. B. (1982). Measuring user attitudes in MIS research: a review. OMEGA, The International Journal of Management Science, 10(2), 157–165.

Swanson, E. B. (1988). Information system implementation: bridging the gap between design and utilization. Homewood, IL: Irwin.

Torkzadeh, G., & Doll, W. J. (1991). Test–retest reliability of the end-user computing satisfaction instrument. Decision Sciences, 22, 26–37.

Triandis, H. C. (1971). Attitude and attitude change. New York: Wiley.

Wagman, M. (1983). A factor analytic study of the psychological implications of the computer for the individual and society. Behavior Research Methods and Instrumentation, 15, 413–419.

Webster, J., Heian, J. B., & Michaelman, J. E. (1990). Computer training and computer anxiety in the educational process: an experimental study. Proceedings of the 11th Annual International Conference on Information Systems, Copenhagen.

Webster, J., & Martocchio, J. J. (1990). A construct validity assessment of a computer playfulness measure. Working paper, The Pennsylvania State University.

Woodrow, J. E. J. (1991). A comparison of four computer attitude scales. Journal of Educational Computing Research, 7(2), 165–187.

Young, B. J. (2000). Gender differences in student attitudes toward computers. Journal of Research on Computing in Education, 33(2), 204–214.

Zoltan, E., & Chapanis, A. (1982). What do professional persons think about computers. Behavior and Information Technology, 1(1), 55–68.

Teresa M. Shaft is Assistant Professor of Management Information Systems at The University of Okla-

homa’s Michael F. Price College of Business. She received her PhD in Management Information Systems

from the Pennsylvania State University. Her research interests focus on the cognitive processes of systems

developers, the role of information systems in environmental management, and IT effectiveness. Her re-

search appears in journals including Information Systems Research, Journal of Management Information

Systems, Database Advances, Behavior and Information Technology, and Journal of Industrial Ecology. Her

research has been supported through grants from the US National Science Foundation.

Mark P. Sharfman is Associate Professor of Strategic Management in the Michael F. Price College of

Business at the University of Oklahoma. His articles on research methods, environmental management,

corporate social performance and strategic management have appeared in the Academy of Management


Journal, Academy of Management Review, Business and Society, Business Horizons, Corporate Environ-

mental Strategies, Decision Sciences, Journal of Business Ethics, Journal of Corporate Citizenship, Journal

of Industrial Ecology, Journal of Management, Journal of Management Studies, Pollution Prevention Review

and the Strategic Management Journal. His research has been supported through grants from the US

Environmental Protection Agency and the US National Science Foundation.

Wilfred W. Wu is a PhD candidate in Management Information Systems at the Michael F. Price College of

Business at the University of Oklahoma. Mr. Wu’s research has appeared in the Proceedings of the

Academy of Management Conference and the Americas Conference on Information Systems. His current

research interests include Technology & Innovation, IS Strategy, and Systems Analysis & Design. He has

worked as an analyst for a number of firms including Viacom Corporation, Time-Warner, Federated D.S.,

and Caterpillar.