Page 1: Evaluating Information System Success in Public Organizations

INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films

the text directly from the original or copy submitted. Thus, some thesis and

dissertation copies are in typewriter face, while others may be from any type of

computer printer.

The quality of this reproduction is dependent upon the quality of the

copy submitted. Broken or indistinct print, colored or poor quality illustrations

and photographs, print bleedthrough, substandard margins, and improper

alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript

and there are missing pages, these will be noted. Also, if unauthorized

copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by

sectioning the original, beginning at the upper left-hand corner and continuing

from left to right in equal sections with small overlaps.

Photographs included in the original manuscript have been reproduced

xerographically in this copy. Higher quality 6” x 9” black and white

photographic prints are available for any photographs or illustrations appearing

in this copy for an additional charge. Contact UMI directly to order.

ProQuest Information and Learning, 300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA

800-521-0600

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.


The Pennsylvania State University

The Graduate School

School of Public Affairs

EVALUATING INFORMATION SYSTEM SUCCESS IN

PUBLIC ORGANIZATIONS: A THEORETICAL MODEL AND

EMPIRICAL VALIDATION

A Thesis in

Public Administration

by

Helaiel Almutairi

Submitted in Partial Fulfillment of the Requirements

for the Degree of

Doctor of Philosophy

May 2001


UMI Number: 3014588

Copyright 2001 by

Almutairi, Helaiel M. F.

All rights reserved.

UMI Microform 3014588

Copyright 2001 by Bell & Howell Information and Learning Company. All rights reserved. This microform edition is protected against

unauthorized copying under Title 17, United States Code.

Bell & Howell Information and Learning Company 300 North Zeeb Road

P.O. Box 1346, Ann Arbor, MI 48106-1346


We approve the thesis of Helaiel Almutairi.

Date of Signature

Rupert F. Chisholm, Professor of Management, Thesis Advisor, Chair of Committee, Coordinator for Graduate Programs in Public Administration

Mehdi Khosrowpour, Associate Professor of Information Systems

Robert F. Munzenrider, Associate Professor of Public Administration

Chair of Public Policy and Administration


Abstract

Assessing the success of information systems within organizations has been

identified as one of the most critical issues of information system management in both

public and private organizations. In the private sector literature, there are several conceptual and empirical studies that have investigated how to evaluate information systems. In the public sector literature, on the other hand, there is a scarcity of studies that deal with the evaluation of information systems in public organizations.

This issue is expected to grow in importance as usage of, and investment in, information systems within public organizations increases.

The primary purpose of this study is to develop a model that can be used to

measure the success of information systems within public organizations. This study used

the cumulative information systems research in both public and private organizations to develop

the study model.

In this study, DeLone and McLean’s model was used as the conceptual

foundation for research. This study conceptualized the DeLone and McLean model in

three frames. The outer frame is called the external environment frame, the middle frame

is called the task environment frame, and the inner frame is called the organizational

boundary frame. The DeLone and McLean model proposed that there are six variables

(System Quality, Information Quality, System Usage, User Satisfaction, Individual

Impact, and Organizational Impact) that measure the success of information systems within the boundary of an organization; the model does not include any external actors in the evaluation

process. A seventh variable, External Environment Satisfaction, was added to the DeLone

and McLean model to denote the satisfaction of external actors.
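The extended variable set can be written down as a small adjacency map. In the sketch below, the links among the six original variables follow DeLone and McLean's model as summarized in the text; where the added External Environment Satisfaction variable attaches is not specified here, so its placement downstream of Organizational Impact is purely an illustrative assumption.

```python
# Sketch of the extended success model as an adjacency map.
# The six DeLone-McLean links follow their original model; the placement of
# "External Environment Satisfaction" (after Organizational Impact) is an
# illustrative assumption, not the study's specification.
SUCCESS_MODEL = {
    "System Quality":        ["System Usage", "User Satisfaction"],
    "Information Quality":   ["System Usage", "User Satisfaction"],
    "System Usage":          ["User Satisfaction", "Individual Impact"],
    "User Satisfaction":     ["System Usage", "Individual Impact"],
    "Individual Impact":     ["Organizational Impact"],
    "Organizational Impact": ["External Environment Satisfaction"],  # assumed link
    "External Environment Satisfaction": [],
}

# Sanity check: every link endpoint is itself a declared variable.
for targets in SUCCESS_MODEL.values():
    assert all(t in SUCCESS_MODEL for t in targets)
```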


In this study, the relationships in the DeLone and McLean model were tested. Six

Kuwaiti public organizations were randomly selected as the study’s sample. A survey

methodology was chosen to collect data. A total of 363 usable questionnaires were

obtained. Factor analysis, correlation analysis, regression analysis, and path analysis were

used to analyze the study’s model.
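One of the measurement steps behind these analyses, screening scale reliability, is easy to sketch. The minimal example below (synthetic data; the item count and loadings are invented and only the sample size of 363 comes from the text) computes Cronbach's alpha, the standard reliability coefficient for a multi-item scale.

```python
import numpy as np

# Synthetic responses: 363 respondents (the study's usable sample size)
# answering a hypothetical 5-item scale driven by one latent construct.
rng = np.random.default_rng(1)
n_resp, k_items = 363, 5
latent = rng.normal(size=(n_resp, 1))
items = latent + 0.7 * rng.normal(size=(n_resp, k_items))

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

alpha = cronbach_alpha(items)
```

Because the five synthetic items share one strong latent driver, alpha comes out high; items tapping unrelated constructs would drive it toward zero.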

Initial findings of this study did not support the DeLone and McLean model as it

was originally proposed. The findings indicated that information system success is a three-variable model. This model proposes that Satisfaction affects Individual Impact

that, in turn, affects Organizational Impact. Also, Satisfaction directly affects

Organizational Impact. Based on the research findings, several implications for public administration theory and management, together with directions for future research, are proposed in the conclusion.
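The final three-variable structure, Satisfaction affecting Individual Impact and both affecting Organizational Impact, can be estimated with exactly the regressions described above. The following is a minimal sketch on synthetic data: the path values (0.6, 0.3, 0.5) and noise levels are invented for illustration, and only the sample size of 363 comes from the study.

```python
import numpy as np

# Synthetic standardized data for the three-variable model:
#   Satisfaction -> Individual Impact -> Organizational Impact,
# plus a direct Satisfaction -> Organizational Impact path.
rng = np.random.default_rng(0)
n = 363                                   # usable questionnaires in the study
sat = rng.normal(size=n)
ii = 0.6 * sat + rng.normal(scale=0.8, size=n)
oi = 0.3 * sat + 0.5 * ii + rng.normal(scale=0.8, size=n)

def ols(y, *preds):
    """Least-squares fit; returns coefficients with the intercept first."""
    X = np.column_stack((np.ones_like(y),) + preds)
    return np.linalg.lstsq(X, y, rcond=None)[0]

p1 = ols(ii, sat)[1]            # Satisfaction -> Individual Impact
_, p2, p3 = ols(oi, sat, ii)    # direct Sat -> OI path, and II -> OI path

direct = p2
indirect = p1 * p3              # Satisfaction's effect routed through Individual Impact
total = direct + indirect
```

Decomposing the Satisfaction effect into direct, indirect, and total components in this way mirrors the effect tables a path analysis produces.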


TABLE OF CONTENTS

Page

LIST OF FIGURES x

LIST OF TABLES xi

LIST OF ABBREVIATIONS xiii

Chapter 1 INTRODUCTION 1

Chapter 2 LITERATURE REVIEW 5

External Environment and Information Systems in the Public Sector 6

Studies of Information System Success 13

System Quality 14
Measures of System Quality 19

Information Quality 20
Measures of Information Quality 22

System Use 23
Measures of System Use 27

User Satisfaction 29
Measures of User Satisfaction 32

Individual Impact 33
Measures of Individual Impact 34

Organizational Impact 35

Measures of Organizational Impact 36


Integrated Models of Information System Success 37

Literature Abstract and Assessment 46
External Environment and Information Systems in Public Sector 46

Studies of Information System Success 47

Integrated Models of Information System Success 49

Chapter 3 RESEARCH METHODOLOGY 52

Model Formulation 52

Model to be Tested 60
Research Question and Hypothesis 61

Operationalization 63
System Quality and Information Quality 63

System Use 64

User Satisfaction 64

Individual Impact 64

Organizational Impact 65

Population and Sample 66

Translation and Pilot Study 67


Data Collection Methods 70

Data Screening and Reliability of Measurement Instruments 74

Limitations of the Study 76

General Outline of Plan for Data Analysis 77

Chapter 4 RESEARCH FINDINGS 78

Respondent Characteristics 78
Age and Education 79

Gender 79

Length of Government Career and years of service in the current organization 80

Information System Experiences 81

Correlation Analysis 83

Factor Analysis 85
Factor Analysis of the Independent Variables (SQ, IQ, US, SU) 86

The System Quality scale 87

The Information Quality Scale 89

The System Usage Scale 89

The User Satisfaction Scale 90

Factor Analysis of the Dependent Variables (IM, OI) 95


The Individual Impact Scale 96

The Organizational Impact Scale 96

The Implications of Results of Factor Analysis on the Study’s Model 96

Modifying the 3b Equations 99

Modified Research Question and Hypothesis 100

Scales Reliabilities 100

Second Round of Correlation Analysis 105

Regression Analysis 106
First Regression Analysis: Regressing Individual Impact on Satisfaction 108

Second Regression Analysis: Regressing Organizational Impact on Individual Impact 110

Third Regression Analysis: Regressing Organizational Impact on Satisfaction and Individual Impact 111

The Implications of Results of the Regression Analysis on the Study's Model 112

Path Analysis 114
Findings of Path Analysis 114

A Comparison between the Results Produced by Regression Analysis and Path Analysis 120

Chapter 5 SUMMARY, CONCLUSION, AND RECOMMENDATIONS 124

Summary of the Findings 124


Potential Contributions and Implications 129

Future Research Directions/Suggestions 134

BIBLIOGRAPHY 136

APPENDIXES 156

Appendix A: English Version of the End Users Questionnaire 157

Appendix B: English Version of the Management Questionnaire 162

Appendix C: Letter of Approval from the Human Subjects Committee at Pennsylvania State University 165

Appendix D: Letters of Approval from Participating Ministries 167

Appendix E: Signed Letters from the Translators 179

Appendix F: Arabic Version of the End Users Questionnaire 182

Appendix G: Arabic Version of the Management Questionnaire 193

Appendix H: English Version of the Cover Letter 196

Appendix I: Arabic Version of the Cover Letter 198


LIST OF FIGURES

Figure Page

1 The DeLone and McLean Model of IS Success 15

2 The Seddon and Kiew Model of IS Success 40

3 The Glorfeld Four Variables Model Of IS Success 42

4 The Seddon Model of IS Success 43

5 Comprehensive Model for Evaluating IS in Public Organizations 54

6 Model to be Tested in this study 62

7 Study Model after Factor Analysis 98

8 Study Model after Regression Analysis 113

9 Study Model after Path Analysis 116


LIST OF TABLES

Table Page

1 Questionnaires Distribution and Response for Six Ministries 73

2 Reliability of Measurement Instruments 75

3 Respondent Profile: Personal Characteristics 80

4 Respondent Profile: Professional Characteristics 82

5 Pearson Correlation Matrix of the Six Variables in the Study 83

6 Eigenvalue of Factors 87

7 Factors of Independent Variables: Rotated Factor Matrix 89

8 Summary of Items Eliminated from Further Analysis 91

9 Summary of Items Loadings 92

10 Eigenvalue of Factors 95

11 Factors of Dependent Variables: Rotated Factor Matrix 97

12 Scale Reliability of the Satisfaction Variable 101

13 Scale Reliability of the System Usage Variable 102

14 Scale Reliability of the Individual Impact Variable 103

15 Scale Reliability of the Organizational Impact Variable 104

16 Pearson Correlation Matrix of the Four Variables in the Study 106

17 Summary of Simple Regression Analysis for Variable Predicting Individual Impact 109

18 Summary of Simple Regression Analysis for Variable Predicting Organizational Impact 110

19 Summary of Multiple Regression Analysis for Variables Predicting Organizational Impact 111

20 Summary of Standardized Path Coefficients of Paths in the Model Produced by the Path Analysis 117


21 Summary of Direct, Indirect, Total Effects of Research Model Variables 118

22 Measures of Goodness of Fit for the Model Produced by the Path Analysis 119

23 Summary of Relationships Found among the Variables in the Study's Model Using Regression Analysis and Path Analysis 121

24 Summary of Direct, Indirect, Total Effects of Research Model Variables Determined by Regression Analysis and Path Analysis 122


List of Abbreviations

IS: Information System

SQ: System Quality

IQ: Information Quality

SU: System Use

US: User Satisfaction

II: Individual Impact

OI: Organizational Impact

OB: Organizational Boundary

EES: External Environment Satisfaction

STIS: Satisfaction


Chapter 1


INTRODUCTION

Information systems are widely used in public organizations. These systems are

particularly appropriate because public organizations are, by their nature, information

intensive. As such, they need information management systems to collect, store, and

retrieve large volumes of information. Consequently, many public organizations have

invested substantial resources in information management systems.

The use and investment in information management systems by public organizations

will continue to increase for two reasons. First, today, virtually everyone is using some type

of information system in their day-to-day activities. Public organizations cannot afford to be

left behind technologically, since many private citizens are using these systems to manage

their personal information and, at the same time, these consumers expect to use these same

technologies to communicate with the government agencies they interact with. Second,

virtually every effort to enhance the effectiveness and efficiency of public organizations

mandates the use of information systems to improve service delivery and reduce costs (e.g.,

reinventing government movement).

With the universal use of, and investment in, information systems, one would expect

there to be an extensive body of literature concerning research into the use of information

systems in the public sector. However, this is not the case. The first authors to articulate a

case for a separate line of research for the use of information systems in public organizations

were Bozeman and Bretschneider (1986). Bozeman and Bretschneider justified this separate


line of research based on the argument that MIS literature in the private sector overlooks the

effect of external environmental variables on information systems, which is a distinguishing

characteristic of public organizations. They proposed a new line of research to be called

Public Management Information Systems (PMIS) and, since the 1980s, there have been

many researchers who have contributed to this field.

As the field grew, however, the PMIS literature did not mature to meet the needs of

practice. One particular area that is in urgent need of further exploration is the evaluation of

information systems currently in place in public organizations. In the current environment,

with the substantial investment in information systems and the push to develop

performance-based public organizations, public sector managers are handicapped by a lack

of appropriate instruments to measure the success of their information systems and, in turn,

are unable to justify investment in existing and future information systems. This is

supported in Caudle, Gorr, and Newcomer (1991) and Swain (1995), whose investigations

of key issues facing public sector managers found that the need to be able to measure

effectiveness was ranked highly.

The current contribution to PMIS research in this area is limited to several theoretical

studies (Stevens & McGowan, 1985; Bozeman & Bretschneider, 1986; Newcomer, 1991).

Researchers in this area argued that external players must be taken into account when

evaluating information systems. Valid measures, however, are in short supply, if they exist

at all. The public information system management literature must mature more quickly to

afford enable public sector managers the necessary instruments to measure their information

systems.


Evaluating information systems is just as important in the private sector (Brancheau

& Wetherbee, 1987; Palvia, Palvia, & Zigli, 1992; Kim & Kim, 1999). However, unlike the

body of PMIS literature, there is no dearth of commentary and literature - either theoretical

or empirical - on evaluating information systems in the private sector workplace (King &

Rodriguez, 1978). The development of research in this area started with an emphasis on

efficiency, using a single measure for success. Most often, this single measure was based on

economic analysis. Researchers, however, shifted their emphasis toward user effectiveness

by focusing on user satisfaction, usage, information quality, system quality, and

organizational impact, although the single measure was still used to measure success.

More recently, however, an increased awareness of the complexity of evaluating

information systems issues has prompted several researchers in this area to question this

approach and to doubt any proposals that single criteria are effective as definitive success

variables (Kanellis & Paul, 1999). Consequently, more pluralistic approaches have started

to appear in this area of research. These approaches are based on the interpretations of case

studies rather than surveys and laboratory experiments (ibid). Several models for evaluating

information systems have emerged from these pluralistic approaches. These models attempt

to capture key dimensions of success and the interaction between these dimensions.

However, these models have not been comprehensive enough to include the external

environment, and have rarely been tested empirically.

The main objective of this study is to develop a comprehensive model to help public

sector managers evaluate their information management systems. Viewed in systems terms,

the model will provide public sector managers with the basic feedback function as well as

provide a necessary component for organizational learning. Literature concerning the


successful implementation of information management systems - in both the private and

public sectors - will be used to develop this model. First, a comprehensive theoretical

model will be proposed, then part of the model in this study will be empirically tested, as a

first step in developing a more comprehensive model. Established measures taken from the

existing literature will be used in operationalizing the model.

This is the first comprehensive study concerning both internal organizational variables and an external environmental variable to be conducted in the public sector in

Kuwait. Thus, there is a dynamic opportunity to provide critically needed knowledge on the

dimensions of information systems success in the public sector, on the interplay between

these dimensions, and on the relative importance of these various dimensions. This study

will enrich the PMIS literature and help assess the usefulness of existing concepts, models,

and instruments.


Chapter 2


LITERATURE REVIEW

This chapter presents three bodies of literature. Section one presents studies that

have investigated the relationship between the external environment and information

systems within public organizations and the implications of this relationship on evaluating

information systems in the public sector. The common denominator of these studies is the

emphasis on the importance of external variables in evaluating information systems in the

public sector.

Section two presents studies that have evaluated information system success. Most

of these studies were conducted in the private sector. These studies investigated and

analyzed different dimensions of information system success and how these dimensions are

related to other organizational variables (e.g., task characteristics, race, user participation,

job satisfaction, etc.). Most of these studies focused on one or two dimensions of evaluating

information system success.

Section three presents studies that have attempted to develop comprehensive models

for evaluating information systems success by integrating the dimensions identified in the

studies in Section two.


External Environment and Information Systems in the Public Sector

Several information system researchers have emphasized the dependency of public

organizations on the external environment. On one hand, this dependency mandates that

public sector organizations design and manage effective information systems to enable these

organizations to collect, store, and disseminate information about their environments -

especially in a highly turbulent environment requiring effective techniques for monitoring

changes in the environment. On the other hand, IS managers in public organizations need to

take this dependency on the environment into account in IS design. The following

paragraphs review the implications of this environmental dependency for the management of information systems within public organizations, especially in the

area of evaluating information systems.

Stevens and McGowan (1985) attempted to develop a framework for public

information systems using a systems and contingency perspective. In their model, the

writers viewed both the inside of the organization and the external environment as composed of

subsystems. They asserted that the organization is composed of different management

levels (e.g., strategic, mid-level or coordination level, and operational level) and different

functions (e.g., human resource, financial, planning). According to the writers, each of these

functions and levels could be considered a subsystem that has its specific type of

information, decisions, and objectives. Regarding the external environment, the writers

proposed three types of environments. The first type is the operational environment, which

includes the external actors that are highly significant to the public organization, such as

interest groups, legislators, and service recipients. The second type of external environment


is called the general environment, and includes all of the external actors that operate in the

public organization environment, such as economic variables, technology variables, and

demographic variables. In the third type of external environment (the remote environment)

the writers included intangible factors that public organization managers deal with when performing their functions, such as uncertainty, complexity, and threats.

The role the information system plays in the public sector organization is greatly

influenced by these external and internal subsystems. According to the researchers, when

public organization managers do strategic planning, they must take into account the

expectations of major outside and inside interests. One approach is to develop a database

that incorporates these expectations.

Another example is that in the operational environment there are legislative,

executive, judicial, and financial/budgetary controllers who impose certain authority and

financial standards on public organizations. For example, often, public organizations are

obliged to follow several legislative statutes (e.g., paperwork reduction acts) intended to improve

the internal operation of these organizations. In response to these standards, public

organization information structures should be able to generate relevant information for both

external reporting and internal control.

Stevens and McGowan (1985) identified several criteria for evaluating

information systems in public organizations: 1) accuracy and applicability of information

provided to managers and users, 2) timeliness of information, 3) User Satisfaction, and 4)

acceptance by managers and users. These researchers also proposed that these criteria could

be applied to the internal, operational, and control objectives, as well as the analysis of the

environmental influences that may directly affect internal organizational functions (p. 141).


In other words, these criteria could be used to assess the success of an information system from

the perspectives of both the internal and external users.

Bozeman and Bretschneider (1986) also attempted to develop a model for the Public

Management Information System (PMIS). These researchers strongly believed that external

factors, or what they called the distal environment (e.g., political and economic authorities),

influenced the internal factors in an organization, or what they called the proximate

environment, which includes variables related to the work context and the attitudes

and behaviors of individuals in an organization. According to the researchers, this strong

external influence on the internal factors is what makes information systems within public

organizations different from those in private organizations.

Consequently, the researchers argued that MIS performance measures should reflect

the unique characteristics of public organizations. According to the researchers, accountability is important in both the public and private sectors; however, this concept has greater importance in public organizations as a result of the nature of the distal environment. For

example, public organization managers are more accountable to individuals and groups

outside the organization. Consequently, measurements of performance of information

systems should reflect the system’s ability to

...handle special queries that aggregate data in unanticipated ways, and produce special reports and analysis. These non-routine forms of analysis will have extremely short time frames, thus adding the dimension of timeliness to the measurement of accountability (Bozeman & Bretschneider, 1986, p. 482).

Furthermore, the researchers added that timely responses to external requests for data

are concerns when evaluating information systems in public organizations at the

environmental level. According to the researchers, during budget cycles, external political


players such as executive branch agencies and legislatures require data that enable these

external actors to evaluate public organizations. These researchers argued that “the degree

to which an organization responds to external data requests in a timely fashion with

appropriate and accurate data can have either positive or negative effects on MIS within the

organization” (Bozeman & Bretschneider, 1986, p. 482).

In an empirical study, Bugler and Bretschneider (1993) studied the adoption of

information systems in public organizations and found a relationship between external relationships and the adoption of information systems. Organizations with closer external relationships were found to have a higher interest in adopting information systems for the purpose of improving these relationships.

In another empirical study, Bretschneider (1990) tested the following hypotheses:

(1) Public Management Information System managers must contend with a greater level of

interdependency across organizational boundaries than do private MIS managers, and (2)

Public Management Information Systems planning is more concerned with extra-organizational linkages, while private MIS is more concerned with internal coordination.

After testing these propositions, Bretschneider (1990) concluded

The environment of PMIS differs from that of its private sector counterpart. The difference is primarily in the form of greater interdependencies, leading, at least in part, to increased accountability, procedural delays, and red tape. Secondly, within these more constrained environments, traditional MIS prescriptions are not automatically adopted. This suggests that the environment of public organizations has led to adaptation of standard management practices. In other words, the organizational environment affects or tailors the nature of management action (p. 543).

Rocheleau (1999) reviewed several cases of information system implementation

projects in several public organizations and concluded that “political factors are often the


most crucial in determining how successful information technology is” (p. 23). Rocheleau

recommended that “Managers [of information systems] will often have to be involved in

exerting political influence and engage systems outside their direct control in order to assure

a successful outcome” (p. 31). Including outside representatives in the evaluation process is

one form of engaging outside systems.

In studying the adoption of microcomputers in both private and public sectors,

Bretschneider and Wittmer (1993) found that organizational environment (i.e., greater levels

of interdependence across organizational boundaries and higher levels of red tape) and task

environment (i.e., the nature and characteristics of tasks) play major roles in innovation and

adoption of information technology. Thus, these researchers strongly recommended taking

into account the nature of these environments in the management of information systems.

In an empirical study, Mansour and Watson (1980) tested the applicability of the

private sector computer-based information system models in the public sector. The model

tested was:

CBIS performance = f(computer hardware and software, behavior, structural, and environmental variables)

Under each category, there were several specific variables. Under CBIS

Performance, there were applications’ performance, the degree of integration in the

database, the decision function provided by decision models, the organizational levels

served, and the interfaces between system elements. The Behavior category included

degree of top management involvement in systems development, the effectiveness of

relationships between computer specialists and other organizational personnel, the amount

of resistance to change by organizational personnel, and the quality and quantity of


computer specialists. Under the Structural category, there were the organizational

placement of the data processing function, the frequency with which educational programs

are offered to end users, the availability of interactive computing, and the length of time the

organization has operated a CBIS. Finally, the Environment category incorporated the

amount of competition the organization faces in the marketplace, the variety of products or

services offered by the organization, the frequency with which the organization offers new

products or services, the amount of customer requirements, and the amount of external

regulation.
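For illustration only, the functional model above can be read as an ordinary least squares fit of a CBIS performance score on composite scores for the four predictor categories. All variable names and values below are hypothetical; this sketches the form of the model, not Mansour and Watson's actual estimation procedure.

```python
import numpy as np

# Hypothetical composite scores (0-10) per organization for each
# predictor category: hardware/software, behavior, structural,
# environmental. Illustrative values, not data from the study.
X = np.array([
    [7.0, 6.5, 5.0, 4.0],
    [5.5, 7.0, 6.0, 3.5],
    [8.0, 5.0, 7.5, 6.0],
    [4.0, 4.5, 3.0, 5.5],
    [6.5, 8.0, 6.5, 7.0],
    [7.5, 6.0, 8.0, 4.5],
])
y = np.array([6.8, 5.9, 7.4, 4.2, 7.1, 6.9])  # hypothetical CBIS performance

# Approximate CBIS performance = f(categories) as a linear model:
# prepend an intercept column and solve by least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted = A @ coef
names = ["intercept", "hw_sw", "behavior", "structural", "environmental"]
print(dict(zip(names, coef.round(3))))
```

The point of the sketch is only that each category enters as a predictor of a single performance criterion; the study itself worked with many specific variables under each category.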

The variables for each category were selected based on the outcome of a

comprehensive survey of the literature, which identified a list of variables in each category.

Second, a panel of IS experts reviewed the list. Variables were included in the final list

based on the weights that these experts assigned to the variables. The final list of variables

was tested on both private and public organizations, although the Environmental variables

were excluded in the public organization case. The researchers argued that this is due to

“lack of relevance [of the environmental variables] for governmental organizations, given

the way the environmental variables were defined. Governmental organizations function in

an environment that is much different from that faced by private business organizations” (p.

525). According to these researchers, even among government organizations, there are

considerable differences in the external environment. Consequently, Mansour and Watson

(1980) proposed that

In order to explore fully the impact of environmental variables on CBIS performance in governmental organizations, it would be necessary to categorize the different types of governmental organizations, develop appropriate environmental variables for each category, and collect data from organizations in the different categories (p. 526).


The researchers did not undertake this effort, but it is certainly a possible area for

future research.

Newcomer (1991) argued that users of information systems in public organizations

are not only the members of the organization, but also users that exist in the extended

environment such as legislative, central management and oversight agencies, program

clients, other governmental agencies, suppliers, and media. Thus, Newcomer argued that

these users should be taken into account when evaluating information systems.

Moreover, Newcomer proposed specific information system success indicators in

public organizations. These indicators included usefulness and reliability, ease of use, error-resistant operations, authorized-use controls, protected system and operations, time savings,

system economic payoff or cost result, user acceptance, and contextual considerations

(which includes, among other things, the unique nature of public-sector access and

accountability). Regarding the last indicator, Newcomer (1991) stated, “Public-sector

information system evaluation must consider how well information systems meet numerous

legislative requirements” (p. 383).

Bozeman and Straussman (1990) also suggested taking into account the external

environment in evaluating information systems in public organizations. The researchers

stated

Public officials’ satisfaction (a surrogate for citizens’ satisfaction) with the final set of goods and services is one measure of PMIS... Such measures are important indicators of technological success of PMIS (p. 123).

In summary, the studies reviewed in this section of the literature review indicate that

there is close interdependency between information systems in public organizations and the


external environment. One implication of this interdependency is the extension of the information system evaluation process to include actors in the external environment that

can influence information systems.

Studies of Information System Success

A large number of studies have evaluated information systems in private

organizations. Most of these studies have attempted either to identify factors that influence

the success of the information system, or investigate how to measure information system

success (Glorfeld, 1994). Generally, most of these studies have focused on internal users

and impacts of information systems without taking into account external users and their

impacts on these systems.

Taking a different approach, DeLone and McLean (1992) focused on the dependent variable itself: information system success. The researchers noted

that there are a large number of studies that have attempted to identify factors contributing to

information system success. These researchers also noted that one of the weaknesses of

these studies is the failure to clearly identify the dependent variable. Consequently, the

researchers organized the literature that was concerned with information system success into

a comprehensive taxonomy for the purpose of giving a more complete view of the

information system success issue. The taxonomy combined four traditional dimensions of

information system success - system quality, information quality, use, and user satisfaction

- with two other dimensions - individual impact and organizational impact. Second, the

researchers developed a comprehensive model for information system success that took into


account all six dimensions of information system success and the relationships among these

dimensions (Figure 1). Although DeLone and McLean (1992) argued that contingency

variables such as the environment of the organization being studied should be taken into

account, these variables were not a main dimension of their model.

In the private sector information system literature, DeLone and McLean’s (1992)

taxonomy was described as being comprehensive enough to take into account all dimensions of

information systems success (Seddon, 1997; Ballantine et al., 1996). As such, in the

following subsections, DeLone and McLean’s taxonomy will be used to organize findings of

studies that investigated information system success in private organizations. The review

will focus on two things: (1) identifying key variables and relationships among them, and (2)

how the variables were operationalized and measured.

System Quality

Studies examining system quality used features of the systems themselves to assess

quality. Some studies evaluated information systems by investigating how information

systems utilized organizational resources such as materials and financial resources. For

example, Kriebel and Raviv (1980, 1982) used microeconomics to develop and test a

mathematical model for evaluating the efficiency of computer services supply in

organizations. They attempted to model the input resources required and the output

products or services provided by the information system department.


Figure 1. DeLone and McLean model of information system success (Source: DeLone & McLean, 1992). The figure depicts the six dimensions - system quality, information quality, use, user satisfaction, individual impact, and organizational impact - and the relationships among them.


In the same vein, Conklin, Gotterer, and Rickman (1982) studied the impact of

background jobs on response times. In this study, the terminal response time was defined as

the interval from the time the operator depressed the transmit key until the response character appeared on the screen; a stopwatch was used to measure this interval. Since

Conklin and colleagues found that longer response time related to decreased user

satisfaction with the system, this study supports the importance of the user’s perception of

system quality.
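The response-time definition used in the study maps directly onto a simple interval timer. A minimal sketch, where `send_request` is a hypothetical stand-in for a terminal transaction that blocks until the response arrives:

```python
import time

def measure_response_time(send_request) -> float:
    """Return the interval, in seconds, between submitting a request and
    its response arriving -- the study's 'terminal response time'."""
    start = time.perf_counter()
    send_request()  # blocks until the response appears
    return time.perf_counter() - start

# Hypothetical transaction that takes roughly 50 ms to respond.
elapsed = measure_response_time(lambda: time.sleep(0.05))
print(f"response time: {elapsed * 1000:.1f} ms")
```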

Using a different approach, a number of studies evaluated the quality of information

systems by examining the organizational effectiveness (i.e., how well the users of the system

are accomplishing their organizational goals) and identifying factors that should exist in an

organization in order to ensure a high quality information system. For example, several

researchers have examined the relationship between user participation in the development of

information systems and system quality (Glorfeld, 1994). Edstrom (1977) investigated the

relationship between users’ influence in the different phases of the system development

process and information system success and found that there is a positive relationship

between users’ influence in the initiation phase and the perceived success of the system.

The participants in this study were asked to rate the implemented information system on a 7-

point Likert-type scale from complete failure to complete success.

Franz and Robey (1986) investigated the relationship between user involvement in

information system development and perceived system usefulness. The study was

conducted on 118 user managers from 34 companies. The researchers found that greater

user involvement in all information system development stages is related to greater

perceived usefulness (surrogate measure of system quality). In the same vein, Kaiser and


Srinivasan (1980) used the perceived worth of the information system as a measure of

system quality. The researchers found that there is a relationship between user involvement

and group process skills, such as the ability to adapt to change, communication skills, level

of conflict and agreement, and information technology effectiveness. The researchers stated

“clearly, user involvement with the activities of the system leads to higher measures of

perceived worth of the system” (p. 202).

In an experimental study, King and Rodriguez (1981) investigated the relationship

between participation and the users’ perception of the worth of the system (surrogate

measure of system quality). The researchers found support for the relationship between

participation and perceived worth of the system, but that participation did not lead to an

increase in system usage.

Similarly, Tait and Vessey (1988) investigated the relationship between user

involvement and system success. System success was measured using an instrument

developed by Bailey and Pearson (1983), which included several items that measured

system quality. Although the researchers did not find any support for this relationship, they

found that system complexity, time, and financial resource constraints have strong direct and

indirect effects on system success through user involvement.

The interest in the relationship between user involvement and information success

led Torkzadeh and Doll (1994) to develop a measurement for user involvement. The

researchers assessed the short-range and long-range stability of the items that measure

perceived involvement, desired involvement, and involvement congruence using the test-retest method. The researchers concluded that the instruments are internally consistent,


stable, and should be used with confidence in user involvement research without concern

about a reactivity effect.
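Reliability assessments of this kind are conventionally reported as internal consistency (Cronbach's alpha) together with a test-retest correlation of total scores. The sketch below uses hypothetical Likert responses and the standard formulas; it is not the authors' data or procedure.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-point Likert responses at two administrations
# (rows: respondents, columns: items). Illustrative values only.
time1 = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
])
time2 = np.array([
    [4, 4, 4, 4],
    [3, 2, 3, 2],
    [5, 5, 4, 4],
    [3, 4, 3, 4],
    [4, 4, 4, 4],
])

alpha = cronbach_alpha(time1)
# Short-range stability: correlate total scores across administrations.
retest_r = np.corrcoef(time1.sum(axis=1), time2.sum(axis=1))[0, 1]
print(f"alpha = {alpha:.2f}, test-retest r = {retest_r:.2f}")
```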

Goslar (1986) investigated the usefulness of several decision support system features

(used as surrogate measure of system quality) for marketing problem solving. Features

examined in this study were the interrogation (e.g., what-if analysis, impact analysis,

sensitivity analysis), computation (e.g., standard arithmetic calculation, complex

mathematical models, cost-benefit ratios), forecasting (e.g., moving average, regression,

polynomial fit), range analysis (e.g., normal distribution, uniform distribution, general

cumulative distribution), and simulation analysis. Goslar found that interrogation features,

computational features, and forecasting models were considered most useful by DSS users,

while range analysis features were considered the least useful.

Davis (1989), in several empirical studies, found that perceived usefulness (the

effects of the system on work) and perceived ease of use (whether easy to use and interact

with the system), which are two surrogates of system quality, are associated with system acceptance (current and future usage), with correlation coefficients ranging from .45 to .85. Davis also found that perceived usefulness and perceived ease of use are significantly correlated with each other (r = .69).
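The associations reported here are Pearson correlations over scale scores. A minimal sketch with hypothetical 7-point scale data (the values are illustrative, not Davis's):

```python
def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-respondent scores on Davis-style 7-point scales.
usefulness = [6.2, 3.1, 5.5, 4.0, 6.8, 2.5, 5.0]
ease_of_use = [5.8, 3.5, 5.0, 4.2, 6.5, 3.0, 4.6]
usage = [6.0, 2.8, 5.2, 3.9, 6.9, 2.2, 4.8]

print(f"usefulness-usage r = {pearson(usefulness, usage):.2f}")
print(f"ease of use-usage r = {pearson(ease_of_use, usage):.2f}")
print(f"usefulness-ease of use r = {pearson(usefulness, ease_of_use):.2f}")
```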

In the context of testing the technological acceptance model, Karahanna and Straub

(1999) found that usefulness (the belief that an information system is useful in job

performance) is affected by perceived ease of use (the extent that an information system is

friendly).

Yuthas and Young (1998) conducted a study to test whether user satisfaction and

system usage are appropriate indicators of decision-making effectiveness (system quality).


System usage was defined as the extent and nature of use of the information system. Satisfaction was defined as the extent of improvement in decision-making outcomes. Yuthas

and Young concluded that user satisfaction and system usage measures are not acceptable

alternatives to direct performance measurement.

Measures of System Quality

Researchers have used many surrogate measures for system quality, ranging from

single-item scales to multi-item measurements. For example, Barki and Huff (1985) used a

single semantic differential item to measure overall user satisfaction regarding decision

support systems. Similarly, Edstrom (1977) measured the success of the information system through a single question by which users rated the implemented system. The multi-item

instruments measured system quality through perceived value or worth, usefulness, and

perceived ease of use. For example, Davis (1989) developed and validated two

measurements for perceived usefulness and perceived ease of use. Each instrument consists

of six items.

Bailey and Pearson (1983) developed and validated instruments to measure general

user satisfaction. Seven items from this instrument were assigned to measure system

quality. This instrument has been validated by several researchers (Ives et al., 1983; Baroudi & Orlikowski, 1988; Iivari & Ervasti, 1994; Mahmood & Becker, 1985, 1986) and has become

a standardized measure in the MIS field.

Doll and Torkzadeh (1988) developed an instrument to measure end user computing

satisfaction (EUCS). The instrument merged items that measure the quality of information


(content, format, and timeliness) with items that measure the quality of the system

(accuracy, ease of use). In the EUCS, there are 12 items, four of which were designed to

measure system quality (ease of use and accuracy). Torkzadeh and Doll (1991) and

Hendrickson, Glorfeld, and Cronan (1994) validated this instrument. Hendrickson and

colleagues conducted their study on public organizations and found that the EUCS measure

is valid and stable over time.

Information Quality

Researchers studying the information quality dimension have examined information

system output (i.e., information quality from users’ perspective), and how several

organizational variables are related to Information Quality. Gallagher (1974) studied the

value of MIS in a medium-sized company using estimated annual dollar values and a semantic

differential technique as two measures of perceived value of information (see next section

for more detail). Gallagher found a positive relationship between the perceived value of

information and participation in the design of the system and managerial position. Users

who participated in the design of their information systems evaluated the output of those

systems more favorably than users who did not. Gallagher also found that managers

in upper-level managerial positions value MIS reports more highly than those lower in the

hierarchy.

Iivari and Koskela’s (1987) overview of the PIOCO model made a connection

between information system design and information quality. The PIOCO is composed of

three sub-models: P model is defined as “restricted, planned change in the host


system/organization” (p. 406). The second model is I/O, which presents the information

system from the viewpoint of the user. The third model is C/O, which determines the

internal structure and action of an information system. This study is relevant to the P model,

which takes into account the viewpoints of external users of the information system, such as

interest groups. However, the researchers did not provide the means to measure the effect of

these external players on the quality of information systems. Iivari and Koskela (1987)

justify this by stating

It is difficult to provide effectiveness criteria (schemas) of wide applicability. Due to the diversity of potential effects, the principle of many points of view should be applied to their identification reflecting the various interests involved and taking into account not only the economic effects...but also various social, technical, and managerial effects (pp. 414-415).

In the same vein, Mahmood and Medewitz (1985) investigated the relationship

between the selection of a DSS design method and its ultimate success. DSS success was

measured through DSS usage, user satisfaction, and user attitude and perception criteria.

Data was collected from managers, intermediaries, and designers. Among the most highly

rated DSS successes were several items that related to information quality such as accuracy

of DSS reports, useful output reports, and better types of output reports. Consequently, this

study notes the connection between system design and information quality.

Blaylock and Rees (1984) tested the relationship between a decision-maker’s

cognitive style and information, the output of the information system. The researchers used

Larcker and Lessig’s (1980) questionnaire measuring usefulness of information by

examining two components: importance of information and usefulness of information. The

first term is defined as the “quality that causes a particular information set to acquire

relevance to the decision maker” (p. 123). Usefulness is defined as the “information quality


that allows a decision maker to utilize the [information] set as an input for problem solution”

(p. 123). The researchers found a strong correlation between cognitive style and usefulness

of information.

In an exploratory field study of five senior executives, Jones and McLeod (1986)

examined where and how senior executives get their decision-making information. The

study’s findings indicated that executives obtain a great deal of information from both the

environment and from informal information sources, and that formal computer-based

information systems do not seem to provide much information directly to the executive.

These researchers have suggested that “executive information systems be conceptualized for

design in the broadest terms possible to include internal and external information sources,

personal and impersonal sources, and a broad spectrum of media (meetings, computer and

non-computer reports, telephone, etc.) that vary in information richness” (p. 244). This

study showed how the external sources of information are important and related to the

quality of information used by an organization’s members.

Measures of Information Quality

Like the preceding dimension, researchers have used many surrogate measures for

information quality. For example, Bailey and Pearson (1983) developed a user satisfaction

instrument, which included nine items that measure information quality: accuracy,

timeliness, precision, reliability, currency, completeness, format of the output, volume of

output, and relevancy. This instrument has been validated by several researchers (Ives et al., 1983; Baroudi & Orlikowski, 1988; Iivari & Ervasti, 1994; Mahmood & Becker, 1985, 1986), and has become a standardized measure in the MIS field.

Gallagher’s (1974) multi-item measurement assessed information quality by utilizing

two measures of perceived value: an estimated dollar value in response to the following

question:

Assume that your company plans to eliminate all data processing and to obtain this report from another firm on an annual subscription basis. What is the maximum amount you would recommend paying for this report for you? (Gallagher, 1974, p. 48)

The second was a set of fifteen 7-point semantic differential bipolar adjective pairs

to which the respondent was asked to indicate his opinion of the report. The 7-point scale

ranged from -3 (extremely unfavorable) to +3 (extremely favorable). The score on this

measure of perceived value is the average of responses to all 15 adjective pairs.
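Scoring on this semantic differential measure reduces to averaging the fifteen bipolar ratings. A sketch with hypothetical responses:

```python
# One respondent's ratings on fifteen bipolar adjective pairs,
# each from -3 (extremely unfavorable) to +3 (extremely favorable).
# Values are hypothetical.
ratings = [2, 3, 1, 2, 0, 1, 2, 3, 2, 1, -1, 2, 2, 1, 3]

assert len(ratings) == 15 and all(-3 <= r <= 3 for r in ratings)
perceived_value = sum(ratings) / len(ratings)
print(f"perceived value score: {perceived_value:.2f}")  # prints 1.60
```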

Doll and Torkzadeh’s (1988) EUCS instrument included eight items that assessed

information quality. The eight items measured information quality through its content,

format, and timeliness. Each item was scored on a 5-point Likert-type scale.

System Use

The use of an information system, or information system report or output, is one of

the most frequently reported measures of the success of an information system (DeLone &

McLean, 1992). A number of conceptual studies proposed information use as a measure of information system success. For example, Ein-Dor and Segev (1978) attempted to

identify the organizational context variables affecting the success and failure of MIS.


Organizational context variables were categorized as uncontrollable (e.g., size, structure,

time frame, extra-organizational situation), partially controllable (e.g., resources, maturity,

psychological climate), and fully controllable (e.g., responsible executive, steering

committee). System usage was chosen in the study as the measure of information system

success. The writers asserted that usage was identified as a measure of information system success because usage is correlated with at least some of the other criteria used in the

literature to measure success (e.g., profitability, application to major problem of the

organization, quality of decisions and performance, and user satisfaction). In Ein-Dor and

Segev’s words, “these criteria are clearly mutually dependent...we claim that a manager will

use a system intensively only if it meets at least some of the other criteria, and that use is

highly correlated with them” (p. 1065).

Similarly, Hamilton and Chervany (1981) provided a conceptual hierarchy of system

objectives that needed to be considered in evaluating information systems. In this

conceptual hierarchy, the writers combined two perspectives: the efficiency perspective

(how efficiently MIS development and operations utilize assigned resources to provide the

information system to the user) and the effectiveness perspective (the effectiveness of the

user or the organizational units in using the information system in accomplishing their

organizational mission).

Both the efficiency and effectiveness perspectives have certain objectives. The

efficiency perspective’s objectives are the requirements definition for the information, the

resources consumed to provide the information system, the production capability or capacity

of the resources, and the level of investment in resources. The effectiveness perspective’s

objectives are the information provided by the information system and the support provided


by the MIS function to users of the system, the use of the information system and its effect

on user organizational processes and performance, and the effect of the information system

on organizational performance. Hamilton and Chervany argued that within each type of

objective there is interdependence among the objectives. In others words, each objective

affects the objective that follows. The linkage between the objectives of the two

perspectives, according to the writers, takes place when the organizational performance

objective (effectiveness perspective) affects the environment, which, in turn, affects the

resource investment objective (efficiency perspective). The writers argued that system

usage could be a measure of information system effectiveness because effects on

organizational objectives and performance “do not follow directly and immediately, but

rather result from use of the information system” (Hamilton & Chervany, 1981, p. 58).

Hamilton and Chervany (1981) made another interesting recommendation to extend

the evaluation of the information system process to include not only the primary user of the

information system but also other people involved in the achievement of information system

objectives, both from the efficiency and effectiveness perspectives.

A number of empirical studies have been conducted to investigate the relationship

between information system usage and other organizational variables. For example, King

and Rodriguez (1978) investigated the relationship between user involvement and system

usage. Their experimental study was conducted with managers enrolled in a part-time MBA

program who had completed virtually all of the program requirements. The researchers did

not find a relationship between user involvement and system usage.

In the same vein, Kim and Lee (1986) investigated the relationship between user

participation and degree of MIS usage. They proposed a four-dimensional model for this


relationship: participation characteristics, system characteristics, system initiator, and the

system development environment (includes top management support and overall user

attitudes). There were no external variables included in this model. The researchers found a

relationship between user participation and system usage. Lucas (1975b) investigated the

relationship between decision style, situational and personal factors, attitudes toward

computers, and system usage. The situational variables included in this study were high

potential location, static location, moderate potential location, stable customer base,

transitional customer base, hub office competition, heavy competition, and light

competition. Lucas argued that information system usage is positively related to decision

style, situational and personal factors and attitudes toward computers. Moreover, Lucas

found that positive attitudes toward computers, perceived high-level management support,

and computer potential could be used to predict high levels of information system usage.

Regarding situational factors, Lucas (1975b) concluded

Clearly situational...factors need to be considered in designing accounting and other information systems; the nature of the relationship among these variables will probably be unique and dependent on each organization and its environment (p. 745).

Ein-Dor, Segev, and Steinfeld (1981) tested three proposals related to system usage

and three measures of information system profitability. The three measures of profitability

are actual costs relative to budgeted costs, subjective evaluation of relative resource

requirements, and subjective evaluation of cost savings. The three proposals supported in

this study are

1. the use of an IS increases when it is perceived as profitable and

decreases when it is perceived as not profitable;


2. the greater the contribution to improve decisions or performance, the

greater the use of IS; and the lower contribution to improve decisions

or performance, the lower the level of use; and

3. the more satisfied users are with an IS, the greater the use; and the

less their satisfaction, the lower the level of use.

Karahanna and Straub (1999) studied 100 e-mail system users and found that

system use is affected by the medium’s usefulness, which is affected by perceptions of the

ease of use. LISREL 7 was used to analyze the relationships between the variables. The

goodness of fit index for the model of these relationships was .96. In this study, usefulness

is defined as the belief that an information system is useful in the job, while the ease of use

is defined as the extent to which an information system is friendly.

Baroudi et al. (1986) gave empirical evidence that system usage and user satisfaction

are linked. The researchers noted “user information satisfaction is an attitude toward the

information system, while system usage is a behavior” (p. 234). The study provided

evidence that user satisfaction is related to greater system usage (r = .28), although the study

did not identify the direction of this relationship:

Satisfaction -> Usage versus Usage -> Satisfaction

Measures of System Use

Researchers have used a variety of instruments to measure information use. These

instruments range from actual behavior (e.g., Schewe, 1976), documented usage (e.g., Ein-Dor,

Segev, & Steinfeld, 1981), to self-reported perception of past usage (e.g., Lucas, 1975c).

Kim and Lee’s (1986) study developed a measure of usage that took into account the

voluntary aspect of the usage. Kim and Lee’s measurements took into account the

frequency of the use and the voluntariness of use. Each was measured on a single item, 7-

point Likert-type scale from 1 (much less frequent use) to 7 (very frequent use). The scale

associated with voluntariness was anchored by 1 (completely mandatory use) and 7

(completely voluntary use). To compute the system usage index, the responses to the two

items are multiplied (thus, the range is from 1 to 49) and the square root of the product is

taken for the purpose of normalizing the scale.
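Kim and Lee's two-item index can be sketched directly from the description above; the function below is an illustrative reconstruction, not their original computation:

```python
import math

def usage_index(frequency: int, voluntariness: int) -> float:
    """Kim and Lee's (1986) system usage index, as described in the text.

    Both inputs are 7-point Likert responses (1-7). Their product ranges
    from 1 to 49; taking the square root normalizes the index back to
    roughly the original 1-7 scale.
    """
    if not (1 <= frequency <= 7 and 1 <= voluntariness <= 7):
        raise ValueError("Likert responses must be in the range 1-7")
    return math.sqrt(frequency * voluntariness)

# Very frequent (7), completely voluntary (7) use:
print(usage_index(7, 7))  # -> 7.0
# Infrequent (2), mostly mandatory (3) use:
print(usage_index(2, 3))
```

The multiplicative form means the index is high only when use is both frequent and voluntary; mandatory heavy use and voluntary light use both score in the middle of the scale.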

Building on Igbaria (1992) and Igbaria, Pavri, and Huff (1989), Anakwe,

Anandarajan, and Igbaria (1998) measured usage through four indicators: actual daily

use of the computer, frequency of use, number of packages used by participants, and the

number of tasks the system is used for. Their study was conducted on nine organizations in

Nigeria.

Doll and Torkzadeh (1998) developed a multidimensional measure of how

extensively information technology is utilized in an organizational context for decision

support, work integration, and customer service functions. The instrument consists of 74

items, 62 of which measured System Use, while 12 items measured the impact of IT on

work. Using a pilot sample of 89 usable interviews, the two researchers validated the

instrument.


User Satisfaction

User satisfaction is the measure of the successful interaction between the information

system itself and its users (Glorfeld, 1994). DeLone and McLean (1992) argued that user

satisfaction has been widely used for the following reasons:

First, ‘satisfaction’ has a high degree of face validity. It is hard to deny the success of a system which its users say they like. Second, the development of the Bailey and Pearson instrument and its derivatives has provided a reliable tool for measuring satisfaction and for making comparisons among studies. The third reason for the appeal of satisfaction as a success measure is that most of the other measures are so poor; they are either conceptually weak or empirically difficult to obtain (p. 69).

Many researchers have studied user satisfaction and how it is related to other

variables. For example, Mahmood and Becker (1985/1986) tested the relationship between

end users’ satisfaction and organizational maturity of information system. User satisfaction

was measured using Pearson’s instrument. The organizational maturity of the information

system was measured using Nolan’s stage model. Nolan’s model consists of six stages

(initiation, contagion, control, integration, administration, and maturity). Under each stage,

there are several variables that distinguish this stage. For example, among the variables that

distinguish the maturity stage are, in the area of data processing expenditure, that spending

tracks the rate of sales growth and, in the area of the applications portfolio, that application

integration mirrors the organization's information flows. The researchers found a weak direct correlation between

variables in the maturity stage and the level of User Satisfaction.

Ginzberg (1981) investigated the relationship between users’ expectations and users’

satisfaction. A single item that measures the overall satisfaction with the information


system measured user satisfaction. The study’s findings indicated that users who maintain

realistic expectations prior to implementation were more satisfied with the system and used

the system more than users whose pre-implementation expectations were unrealistic.

Lu and Wang (1997) tested the relationship between user satisfaction, management

styles, and user participation. The study was conducted on IS managers who work in

companies in Taiwan. The researchers found that user participation is not always

significantly correlated with User Satisfaction. Regarding management styles, the

researchers found that management style should be adapted to the IS stage. At the initiation

stage, people-oriented management style has a connection with user involvement, but not

with User Satisfaction. At the development stage, both people-oriented and task-oriented styles

are related to user participation and user satisfaction. At the maturity stage, management

styles have no connection to user involvement, but have significant correlation with user

satisfaction.

Woodroof and Kasper (1998) integrated three organizational behavior theories of

motivation (equity, expectancy, and needs) with user satisfaction. Their argument is based

on the notion that the satisfaction construct is different from the dissatisfaction construct and

that the process of an information system is not like the outcome of an information system.

Accordingly, the writers proposed including four variables in the DeLone and McLean

model: process user dissatisfaction, outcome user dissatisfaction, process user satisfaction,

and outcome user satisfaction. These four variables, according to the writers, affect

usage and satisfaction in the DeLone and McLean model, separately and jointly.

Baroudi and colleagues (1986) tested the relationship between user satisfaction and

System Usage. User satisfaction was measured through the use of Bailey and Pearson


instrument. The researchers found a positive relationship between the two variables (r = .28);

however, the causal ordering of this relationship could not be identified.

Khalil and Elkordy (1999) investigated the relationship between user satisfaction and

systems usage using a sample of Egyptian banks. To measure user satisfaction, the

researchers used the short version of the User Satisfaction instrument originally developed

by Bailey and Pearson (1983). The researchers tested the reliability of this instrument. The

overall reliability coefficient of the instrument was 0.82, meaning that the total score of the

instrument is reliable as a measure of the level of user satisfaction. Moreover, the reliability

coefficients for each of the basic elements in the instrument were calculated using factor

analysis. The reliability coefficients were: relationship with IS staff and systems (0.81),

quality of systems output (0.64), and user’s understanding of systems and user’s

involvement in systems development (0.67). Regarding the relationship between user

satisfaction and usage, the researchers found a positive correlation between the two concepts

(r = .36).

While some studies did identify a positive relationship between usage and user

satisfaction, several studies did not find such a relationship (e.g., Schewe, 1976; Cheney &

Dickson, 1982; Srinivasan, 1985). Kim, Suh, and Lee (1998) argued that contingency

variables (task variability and task analyzability) have an effect on usage and a moderating

effect on the relationship between usage and user satisfaction. An empirical study

conducted on several companies in Korea was used to give evidence for the effect of the

contingency variables. In this study, User Satisfaction was measured through six items

adapted from Maish (1979), Ginzberg (1981), Sanders (1984), and Lee and Kim (1992).

The Cronbach’s alpha for the six items was 0.874.
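Cronbach's alpha, the reliability statistic reported here and at several other points in this review, can be computed directly from raw item responses. The sketch below uses invented responses, not data from any of the cited studies:

```python
from statistics import pvariance

def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for k items given as columns of respondent scores.

    alpha = (k / (k - 1)) * (1 - sum of item variances / variance of totals)
    """
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    sum_item_var = sum(pvariance(item) for item in items)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# Hypothetical 3-item, 5-respondent example (illustration only):
responses = [
    [4, 5, 3, 4, 2],  # item 1
    [4, 4, 3, 5, 2],  # item 2
    [5, 5, 2, 4, 3],  # item 3
]
print(round(cronbach_alpha(responses), 3))
```

Values near 1.0 indicate that the items covary strongly and can reasonably be summed into a single scale score, which is the sense in which the instruments above are called "reliable."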


In an exploratory study, Ang and Soh (1997) examined the relationships between

user satisfaction, job satisfaction, and users’ computer background. The researchers found

that user information satisfaction (UIS) provides a sound indication of job satisfaction;

however, there was no relationship between UIS and computer background.

Palvia and Palvia (1999) investigated the variables that influence user satisfaction in

small businesses. The variables the researchers tested were gender, age, race, education, and

computing skills. Among these variables, gender and age were the only variables that had

significant association with user satisfaction.

Measures of User Satisfaction

User satisfaction is considered one of the most widely used measures of information system

success. Recently, however, some scholars have argued that user satisfaction is not sufficient to

measure IS success. For example, DeLone (1990, p. 88) stated

User satisfaction alone is not sufficient to adequately capture the full meaning of effectiveness. For one thing, it fails to consider the role user behavior plays in the transformation of inputs to outputs. While IS managers may be interested in effect, senior management and stockholders are likely to be more interested in the performance of the human-computer system as it relates to IS investment and operating expenditures.

The popularity of user satisfaction as a measure of information success has led

researchers to operationalize this dimension in many ways. For example, Ginzberg (1981)

used a single item to assess overall user satisfaction, asking: “All in all, how satisfied are

you with the system?”


In a different approach, other researchers have developed multi-item instruments to

assess user satisfaction. For example, Bailey and Pearson’s (1983) instrument focused on

general user satisfaction. The instrument included 14 items that focused on users’

perceptions of the success of the IS. This instrument has also been reduced to eight items

and revalidated by several other researchers (Ives et al., 1983; Baroudi & Orlikowski, 1988;

Iivari & Ervasti, 1994; Mahmood & Becker, 1985/1986). Iivari and Ervasti conducted a

study on one municipal organization with 8000 employees (Oulu City Council). They found

that the user information satisfaction instrument was valid and reliable.

In the same vein, Doll and Torkzadeh (1988) merged ease of use and information

product items to measure the satisfaction of users who directly interact with the computer

using specific applications. Torkzadeh and Doll (1991) and Hendrickson, Glorfeld, and

Cronan (1994) have validated this instrument.

Individual Impact

Individual impact refers to the effect of information on the behavior of the recipient

of the information (DeLone & McLean, 1991). DeLone and McLean indicated that

performance of users of an information system and individual impact are closely related.

Improving performance indicates that the information system has a positive impact.

Millman and Hartwick (1987) found that office automation has led to positive effects

on the workplace. The 75 managers in the sample reported that automation improved

their effectiveness, as well as the effectiveness of their organization. Similarly,

Bikson, Stasz, and Mankin (1985) studied the impact of automation on individuals’ work.


These researchers found that the majority of people employed in automated offices felt that

information systems enriched their work.

Marcolin, Munro, and Campbell (1997) investigated the relationships among job

characteristics (feedback, autonomy, task identity, and skill variety), individual traits

(computer anxiety and locus of control), individual beliefs surrounding technology usage

(perceived relative advantage and perceived ease of use), and user ability to employ

information systems. The findings indicated that skill variety, computer anxiety, and

relative advantage of information systems were important in identifying users with higher

and lower abilities. The regression coefficients of these variables ranged from .10 to -.47.

Igbaria and Tan (1997) investigated the implications and consequences of IT

acceptance by examining the relationship between IT acceptance and its impact on

individual users. The research model involved three components: user satisfaction, system

usage, and individual impact. The findings indicated that user satisfaction is an important

factor affecting system usage, and that user satisfaction has the strongest direct effect on

individual impact.

Measures of Individual Impact

Individual impact has been measured in various ways, including decision

effectiveness (Meador, Guyote, & Keen, 1984), user productivity (Rivard & Huff, 1984),

estimated value of the information system (Cerullo, 1980), and estimated dollar value of the

information received (Gallagher, 1974; Keen, 1981).


Millman and Hartwick (1987) used a questionnaire to assess the impact of office

automation on middle management. Managers were asked whether office automation had

increased, decreased, or had no effect on 15 different aspects of these managers’ job and

work (e.g., importance of job, amount of work required on the job, accuracy demand on the

job, skill needed on the job, interesting job).

Doll and Torkzadeh (1998) used 12 items as part of their multidimensional measure

to test the impact of IT on task productivity, task innovation, customer satisfaction, and

management control. Torkzadeh and Doll (1999) further validated the same 12 items for the

purpose of developing an instrument for measuring the impact of information technology on

work. The reliability scores were 0.93, 0.95, 0.96, and 0.93 for task productivity, task

innovation, customer satisfaction and management control, respectively. The overall

reliability for the instrument was 0.92.

Organizational Impact

Organizational impact refers to the influence of information systems on the overall

organizational performance. Unlike other dimensions, a number of authors who have

studied this dimension extended the measurement of the effects of information systems to

include not only the effects on such internal organizational variables as the effectiveness of

decision making (Lucas, 1981) but also the effects on variables outside the organizational

boundaries, such as relationships with suppliers (Mahmood & Soon, 1991).

Cron and Sobol (1983) and Bender (1986) are examples of researchers who have

focused on the internal effects of information systems. These researchers examined the


overall organizational expenses and how they related to information systems. The

researchers found that companies that lease information systems tended to have higher

organizational expenses.

Mahmood and Soon (1991) attempted to develop a comprehensive model to measure

the effects of information systems on organizations through integrating several internal

organizational variables and external variables. The variables included in the model were:

new entrants, entry barriers, buyers and consumers, competitive rivalry, suppliers, search

cost and switching costs, products and services, economics of production, internal

organizational efficiency, inter-organizational efficiency, and pricing.

Measures of Organizational Impact

The type of variables that each researcher focused on influenced how researchers

measured the impact of information systems on an organization. Researchers who focused

on internal variables used financial measures such as return on investment (Vasarhelyi,

1981) and cost/benefit analysis (Johnston & Vitale, 1988). Other researchers included non-

financial measures. For example, Jenster (1987) examined productivity, innovations, and

product quality.

Mahmood and Soon (1991) developed a measure to assess the impact of information

systems on several of the strategic variables mentioned in the previous section. The

instrument included 50 items beginning with the phrase, “To what extent do you think

information technology...”, measured on a 5-point Likert-type scale from 1 (no extent) to 5

(very great extent). Sabherwal (1999) developed and tested a measure of the impact of


information systems on overall organizational performance. His measure consisted of five

items measuring the impact of information systems in areas such as cost reduction,

improvement of the organization’s image, and customer satisfaction. The Cronbach’s alpha

for this measure was 0.84. The inter-rater reliability measure was also tested and supported.

Using Van de Ven and Ferry’s (1980) criteria of perceived unit performance, Iivari

and Ervasti (1994) developed and tested a measure for the impact of information technology

on organizations. The measure assesses the impact on quantity of output, quality of output,

innovations, reputation for excellence, and morale.

Integrated Models of Information System Success

In contrast to researchers in the second section, researchers in this section attempted

to develop comprehensive models for information system success based on the studies cited

in the second section. Using these models, researchers have attempted to clearly identify

information system success dimensions, the relationships between these dimensions, and the

relationships between these dimensions and other organizational variables.

DeLone and McLean (1992) were among the first researchers to develop a

comprehensive model for IS success. They did this by first conducting a comprehensive

review of different information system success measures; then they developed a scheme for

classifying all IS success measures. This scheme included six categories (or dimensions):

1. System Quality

2. Information Quality

3. Information Use


4. User Satisfaction

5. Individual Impact

6. Organizational Impact.

Next, the researchers developed a model for IS success. DeLone and McLean

argued that this model “recognizes success as a process construct which must include both

temporal and causal influence in determining I/S success” (p. 83).

As illustrated in Figure 1, DeLone and McLean arranged the six information system

success categories (dimensions) listed above to suggest two things: (1) the interdependence

between these dimensions; and (2) the time sequence or causal ordering of these dimensions.

The DeLone/McLean model proposes that SYSTEM QUALITY and INFORMATION

QUALITY singularly and jointly affect both SYSTEM USE and USER SATISFACTION.

Additionally, the amount of SYSTEM USE can affect the degree of USER

SATISFACTION - positively or negatively - and the degree of USER SATISFACTION

also affects SYSTEM USE. SYSTEM USE and USER SATISFACTION are direct

antecedents of INDIVIDUAL IMPACT. Lastly, this IMPACT on individual performance

should eventually have some ORGANIZATIONAL IMPACT (DeLone & McLean, 1992, p.

82, 87).
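The causal structure just described can be summarized as a small directed graph. The sketch below simply encodes the arrows of the published model as an adjacency mapping:

```python
# Directed edges of the DeLone/McLean (1992) IS success model, as
# described in the text: each dimension maps to the dimensions it affects.
DM_MODEL = {
    "System Quality":        ["System Use", "User Satisfaction"],
    "Information Quality":   ["System Use", "User Satisfaction"],
    "System Use":            ["User Satisfaction", "Individual Impact"],
    "User Satisfaction":     ["System Use", "Individual Impact"],
    "Individual Impact":     ["Organizational Impact"],
    "Organizational Impact": [],
}

def antecedents(dimension: str) -> list[str]:
    """Return the dimensions the model says directly affect `dimension`."""
    return [src for src, targets in DM_MODEL.items() if dimension in targets]

print(antecedents("Individual Impact"))  # -> ['System Use', 'User Satisfaction']
```

Note the two-way edge between System Use and User Satisfaction, which encodes the model's claim that each can influence the other.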

In order to effectively use this model, the researchers suggested two things. One is

to systematically combine individual measures from the information systems success

categories (dimensions) for the purpose of creating a comprehensive measurement

instrument. Second, contingency variables (such as structure, size, and environment of

organization) should be taken into account when selecting an information system measure of

success.


DeLone and McLean called for further development and validation of their model.

This call motivated many researchers to test, expand, and modify DeLone and McLean’s

model. In fact, most of the studies that attempted to develop a comprehensive model or

partial model for information system success were based on DeLone and McLean’s model.

Myers, Kappelman, and Prybutok (1997) note that DeLone and McLean’s model is

the most comprehensive IS assessment model offered by existing IS research. However, as

noted earlier in the chapter, the relationship between an IS and its external environment is

not conceptually included in the model.

Seddon and Kiew (1994) tested part of DeLone and McLean’s model. The

researchers proposed the causal paths among the six variables in the model as illustrated in

Figure 2. The researchers tested the relationships among the four variables in the box after

replacing use with usefulness and adding a new variable called “user involvement.” They

found support for the relationships between the specified variables. The correlation analysis

in Seddon and Kiew’s study indicated that the four variables are directly associated, with

Pearson correlation coefficients ranging from .5468 to .7302.

Figure 2. Seddon and Kiew model of information system success. Source: Seddon and Kiew, 1994.

Glorfeld (1994) represented the relationships among the variables in DeLone and

McLean’s model as:

IT Effectiveness = f(SQ, IQ, SU, II, OI)

SU = f(SQ, IQ, US)

US = f(SQ, IQ, SU)

II = f(SU, US)

OI = f(II)

where SQ = System Quality, IQ = Information Quality, SU = System Use, US = User Satisfaction, II = Individual Impact, and OI = Organizational Impact.
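Glorfeld's functional relationships can be sketched as a one-pass evaluation of the chain. The linear weights below are invented purely for illustration; they are not estimates from Glorfeld's study:

```python
def system_use(sq: float, iq: float, us: float) -> float:
    # SU = f(SQ, IQ, US); illustrative linear weights only
    return 0.4 * sq + 0.3 * iq + 0.3 * us

def user_satisfaction(sq: float, iq: float, su: float) -> float:
    # US = f(SQ, IQ, SU)
    return 0.4 * sq + 0.4 * iq + 0.2 * su

def individual_impact(su: float, us: float) -> float:
    # II = f(SU, US)
    return 0.5 * su + 0.5 * us

def organizational_impact(ii: float) -> float:
    # OI = f(II)
    return 0.9 * ii

# One forward pass through the chain, seeding user satisfaction at 3.0
# (SU and US are mutually dependent, so a starting value is needed):
sq, iq = 5.0, 4.0
su = system_use(sq, iq, 3.0)
us = user_satisfaction(sq, iq, su)
ii = individual_impact(su, us)
oi = organizational_impact(ii)
print(round(oi, 3))
```

The seed value makes explicit what the functional notation leaves implicit: because SU and US each appear in the other's equation, any concrete evaluation must either iterate to a fixed point or break the cycle with an initial value.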

After combining User Satisfaction, System Quality, and Information Quality into one

variable called “satisfaction,” Glorfeld tested the model (see Figure 3). His findings

supported the relationships among the variables except the relationship between individual

impact and organizational impact. There was a significant negative relationship between

these two variables. Glorfeld argued that this could be due to the small sample size or to the

composition of the sample.

Garrity and Sanders (1998) extended the user satisfaction variable in DeLone and

McLean’s model, proposing that task support satisfaction, quality of work life satisfaction,

interface satisfaction, and decision-making satisfaction are the constructs that underlie any

measurement of user satisfaction.

Seddon (1997) modified and extended DeLone and McLean’s model by discussing

more deeply the meaning of information system use and adding four new variables

(Expectations, Consequences, Perceived Usefulness, and Net Benefits to Society) to the

model.

Figure 3. Glorfeld four-variable model of information system success. Source: Glorfeld, 1994.

Figure 4. Seddon model of information system success. Source: Seddon, 1997.

As illustrated in Figure 4, Seddon’s model addressed the external effect of

information systems represented by societal consequence. This effect is not a measure of IS

success, but rather is a description of an outcome attributed to IS use. Moreover, the

relationship between the consequences and the IS success model is an influence, not a cause.

This means that in this model, the effect of the external environment on information systems

is very weak.

In one of his recommendations, Seddon drew attention to the multiple people

who evaluate information systems and how measures should reflect this characteristic.

Researchers need to think carefully about who is to be asked to do the evaluation, and what those peoples’ interests are in the outcomes of the evaluation process. Subjects and measures should then be chosen accordingly (Seddon, 1997, p. 252).

Ishman (1998) developed an instrument for measuring three variables of DeLone and

McLean’s model: system quality, information quality, and user satisfaction. The main goal

of this study was to develop an instrument that could be applied in cross-cultural

environments. The study’s population included individuals from five countries: Mexico,

The People’s Republic of China, the United States, Latvia, and the English and French

speaking geographic regions of Canada. Ishman’s instrument was based on the works of

Baroudi and Orlikowski (1988), Joshi (1990), and Kappelman (1990). Of this instrument,

eight items measured information quality and system quality while one item measured user

satisfaction. The instrument was tested and validated using Straub’s (1989) and Churchill’s (1979, 1996) recommended approaches to validating instruments in the MIS field.

Ballantine, Bonner, Levy, Martin, Munro, and Powell (1997) argued that DeLone

and McLean’s model was not complete and they attempted to build a model that could


overcome the perceived weaknesses in DeLone and McLean’s model. They called their

model “The 3-D Model of Information Systems Success.” In this model, they took into

account external factors, based on their belief that, “Information gained from systems is

more likely to be used in the wider context of supporting value chain activities and more

open management than for purely internal consumption” (Ballantine et al., p. 10).

The 3-D model includes three levels and three filters between the three levels. First,

there is the development level, which includes variables such as user involvement and

system type. Next is the deployment level, which includes variables such as user

satisfaction, user skills, and task impact. Last is the delivery level, which includes variables

such as use of output, benefits management, and support of champion. Between these levels

are three filters that affect the three levels. The implementation filter is between the

development and deployment levels. The integration filter is between the deployment and

delivery levels. Finally, there is the environment filter, which comes after the delivery level.

The researchers argued that information system success is influenced by factors that

exist in the environment, such as competitor movement and political, social, and economic

factors. These factors are not in the control of the organization. The researchers explained

that the environmental filter was added to the model because it has implications for

measurement of success. For example, the ability of an information system to reach its

organizational goals could be hindered by factors outside the organization.

No follow-up conceptual or empirical studies have been conducted to extend or

validate the 3-D Model of information systems success. This may be due to the complexity

of this model, which makes empirical testing very difficult.


Literature Abstract and Assessment

The literature review chapter contained three sections. The following is an abstract

and assessment of the three sections.

The External Environment and Information Systems

The first section of the literature review dealt with the relationship between the

external environment and information systems within public organizations, including several

studies that addressed the implications of organizational dependency on the external environment for the evaluation of information systems in public organizations (see Bozeman

& Straussman, 1990; Newcomer, 1991). Studies that investigated the relationships between

information systems in public organizations and the external environment have concluded

that there is very close interdependency between information systems in public

organizations and the external environment (see Stevens & McGowan, 1985; Bozeman &

Bretschneider, 1986).

Several researchers have empirically tested the interdependency between information

systems in public organizations and the external environment. The findings of these studies

indicate that information systems in public organizations are more dependent on the external

environment than those in private organizations (see Bretschneider & Wittmer, 1993;

Bretschneider, 1990). Some researchers argue that failure to recognize this interdependency

could lead to catastrophic results (Bretschneider & Wittmer, 1993).


Mansour and Watson’s (1980) empirical study tested the applicability of the private

sector IS models on the public sector. These researchers concluded that several external

variables in private sector IS models (e.g., amount of competition, variety of products

offered by the organization, the frequency with which the organization offers new products,

etc.) are not applicable to public sector organizations because public organizations function

in a different environment than the one faced by private sector organizations.

Some researchers have investigated the implications of this dependency on the external environment for the evaluation of information systems in public organizations.

These writers argued that evaluation of information systems in public organizations must be

extended to include those actors in the external environment who can influence these

systems (see Bozeman & Straussman, 1990; Newcomer, 1991).

Many measures have been offered to evaluate information systems in public organizations. These measures include accuracy, applicability, timeliness, user satisfaction, and the attitudes of both managers and users (Stevens & McGowan, 1985); timely and accurate response to external requests (Bozeman & Bretschneider, 1986); usefulness and reliability, ease of use, time saving, user acceptance, and meeting legislative requirements (Newcomer, 1991); and public official satisfaction (Bozeman & Straussman, 1990).

Studies of Information System Success

DeLone and McLean’s taxonomy (1992) was used to organize the studies in this

section. Their taxonomy includes six dimensions of information systems success (system

quality, information quality, system use, user satisfaction, individual impact, and


organizational impact). Under each of these dimensions, many studies were discussed in

terms of what variables were included and how the dimension was measured.

In terms of the system quality variable, several studies found a relationship between

system quality and user involvement. Many researchers have developed instruments to

measure system quality, some of which are in wide use because these instruments have

demonstrated that they are reliable and valid through several studies (see Doll & Torkzadeh,

1988; Bailey & Pearson, 1983). Moreover, system acceptance was found to relate to ease of

use and usefulness. User satisfaction and system usage were not found to be indicators of

system quality.

Many variables were found to relate to the information quality variable (e.g., user

participation in the system design, cognitive style, and external sources). As with the

preceding variable, information quality was measured using both single-item and multi-item

scales.

Under the system use variable, individual perceptions, user involvement, situational

variables, ease of use of the system, degree of social influence exerted by supervisors,

perceptions of the social presence of the system, and user satisfaction were found to relate to

system use (e.g., King & Rodriguez, 1978; Kim & Lee, 1986; Lucas, 1975b; Baroudi et al.,

1986). System use was measured through actual daily use of the computer, frequency of

use, the number of software applications used by the participants, the number of tasks the system is used for (Igbaria, 1992; Anakwe, Anandarajan, & Igbaria, 1998), how many times

the system was used and willingness of use (Kim & Lee, 1986), and how extensively

information technology is utilized in an organizational context for decision support, work

integration, and customer service functions (Doll & Torkzadeh, 1998).


Under the user satisfaction variable, organizational maturity, management style,

contingency variables, users’ expectations, and usage were found to relate to user

satisfaction (see Mahmood & Becker, 1985/1986; Ginzberg, 1981; Lu & Wang, 1997;

Khalil & Elkordy, 1999). The popularity of using this variable to measure information

system success motivated many researchers to develop measures of this variable. Among

the most used were the measurements developed by Bailey and Pearson (1983) and Doll and

Torkzadeh (1988).

With regard to the individual impact and organizational impact variables, there were

mixed results concerning whether the information system had a positive or a negative effect

on individuals or on organizations. Measuring this effect proved to be very difficult; few

studies have attempted to do so (see Millman & Hartwick, 1987; Doll & Torkzadeh, 1998;

Torkzadeh & Doll, 1999; Sabherwal, 1999; Iivari & Ervasti, 1994; Mahmood & Soon,

1991). Again, the majority of the studies reviewed in this part of the literature review were

conducted in the private sector and mainly focused on the internal environment.

Integrated Models of Information System Success

This section of the literature review dealt with studies that have attempted to

integrate studies in the second section into comprehensive models of information system

success. Most of the models in this part were based on DeLone and McLean’s model.

Several researchers added new variables (Seddon, 1997; Seddon & Kiew, 1994),

combined existing variables (Glorfeld, 1994), or changed the causal paths (Seddon & Kiew,

1994; Glorfeld, 1994) in the DeLone and McLean model. Some studies identified


conflicting results regarding relationships among DeLone and McLean’s six variables. For

example, Glorfeld (1994) found a positive relationship between user satisfaction and

individual impact. Teo and Wong (1998), however, did not find a relationship between the

same variables. Several researchers argued that this may be due to the small sample size or to

the composition of the sample (Glorfeld, 1994) and differences in the measurement of

concepts involved (Teo & Wong, 1998). All of these studies were conducted in the private

sector. Furthermore, no study has attempted to validate the whole DeLone and McLean model in its original form.

Most of the models in this part either did not include the external environment as a core dimension of information system success, or included external environment variables that are not relevant to public organizations. Even those models that incorporated external

variables were theoretically complex and difficult to test empirically. As such, no study has

attempted to build on these models or empirically test them.

From the preceding three bodies of literature, it appears that there is a need to

develop a comprehensive model for assessing information systems in public organizations.

Unfortunately, there have been few studies (empirical or conceptual) conducted on public

organizations that could be used as a base for building a comprehensive model for assessing

information systems in the public sector. However, models developed to assess information

system effectiveness in private organizations can be modified for use in public

organizations. Of the available concepts, DeLone and McLean’s (1992) model is the most appropriate to use as the basic building block for developing a comprehensive model for assessing information systems in the public sector, since it is considered the most comprehensive information system assessment model available in the information system literature (Myers, Kappelman, & Prybutok, 1997). As such, DeLone and McLean’s model

has gained wide acceptance among information system researchers who attempted to test

and validate the usefulness of different parts of this model (e.g., Seddon & Kiew, 1994;

Glorfeld, 1994; Igbaria & Tan, 1997; Seddon, 1997; Teo & Wong, 1998, Garrity & Sanders,

1998). This suggests that DeLone and McLean’s model has gained strong theoretical and

some empirical support as a unified model for assessing information system success in the information system literature.

The present study will use DeLone and McLean’s model as the foundation for

building a comprehensive model for evaluating information systems in the public sector.

The logical steps in the process of the development of a comprehensive model for

information system success are, first, to test the applicability of DeLone and McLean’s

model in the public sector. This step is essential because studies that have investigated information systems in the public sector, including those that have specifically examined the evaluation of information systems within public organizations, have tested neither the applicability of the DeLone and McLean model as a whole nor any of the six variables and relationships proposed in the model. Thus, testing the applicability of DeLone and McLean’s model is a necessary prerequisite to using it as the foundation for building a comprehensive model for evaluating information systems in the public sector. Once the model has

been tested and validated for use, the external environment variables can be added to the

model.


Chapter 3


RESEARCH METHODOLOGY

This chapter presents the research methodology in eight sections. The first section

describes the steps that were taken to develop a comprehensive model for evaluating

information systems in public organizations, the model tested in this study, and the research

question and hypothesis. The second section describes how measurements used in this study

were operationalized. The third section describes the population and sample. The fourth

section describes how the data were collected. The fifth section describes the translation and

pilot study. The sixth section describes the data screening and first round of reliability

analysis. The seventh section presents an overview of the data analysis plan. The eighth

section presents the limitations of this study.

Model Formulation

Figure 5 illustrates the comprehensive model (the Seven-Dimension model)

proposed in the study. Several steps were taken to develop this model. First, DeLone and

McLean’s (1992) model of IS success was used as the foundation for the Seven-Dimension

Model. Parts of DeLone and McLean’s model have been evaluated in several empirical

studies (Igbaria & Tan, 1997; Teo & Wong, 1998; Glorfeld, 1994; Seddon & Kiew, 1994).

These studies validated the relationships among the six variables proposed by DeLone

and McLean in their model.


The empirical findings concerning the relationships among the variables of DeLone

and McLean’s model further support the inclusion of these relationships in the model

proposed by this study as a second step.

The third step is the incorporation of three frames in the model. One is called the

general environment, the second is called the task environment, and the third is called the

organizational boundary. The concept for these frames was adopted from several studies in

the organization theory literature, the information system literature, and the public

management information system literature.

In the organization theory literature, Thompson (1967) elaborated on the concept of

external environment, asserting that there is a part of the external environment that is most

relevant to an organization called the task environment. Thompson (p. 27) defines the task

environment as “those parts of the environment which are relevant or potentially relevant to

goal setting and goal attainment.” This includes, for example, suppliers of raw materials that represent the input for the organization, customers that buy the organization’s products or services, regulatory agencies, and other organizations that directly affect the operations of the organization. Thus, Thompson indicates that conceptually there are two

types of external environment: the task environment and the residual general environment.


[Figure 5. A Comprehensive Model for Evaluating IS in Public Organizations. The figure nests three frames (the general environment, the task environment, and the organizational boundary) around the six DeLone and McLean dimensions: system quality, information quality, system use, user satisfaction, individual impact, and organizational impact.]


Hall (1972) and Miles (1980) also proposed two types of external environment. Hall

mentioned two types of environmental conditions: general conditions (those conditions of

concern to all organizations, such as the economy and demographic changes) and specific

environmental conditions (specific environmental influences on the organization, such as

other organizations with which it interacts or particular individuals who are crucial to it).

Hall noted that interactions in the specific environment are direct, while the general

environment “is not a concrete entity in interaction, but rather comprises conditions that

must be grappled with” (p. 298).

Miles (1980) agreed with the concepts of general environment and specific

environment. Miles includes those conditions that are important for whole classes of organizations (e.g., technological conditions, legal conditions, political conditions, etc.) in the general environment, asserting that these conditions are potentially relevant for an organization but do not have day-to-day interaction with it. Miles explains

that the general environment has an impact on both the organization and its specific

environment. On the other hand, Miles notes that conditions in the specific environment

have immediate relevance to the organization and interact with it directly. This is equivalent to Thompson’s concept of the task environment.

In the information system literature, Ives and Davis (1980) proposed a model for IS

research using two information system environments: the external environment and the

organizational environment. Ives and Davis defined external environment as including

legal, social, political, cultural, economic, educational, resource, and industry/trade

considerations. Variables in the external environment can affect information systems within

organizations through the resources and constraints that these variables can impose or offer.


For example, legislative budgetary requirements could impose constraints on the resources

available for IS development.

According to Ives and Davis, the organizational environment is marked by the

organizational goals, tasks, structure, volatility, and management philosophy/style. These

variables can affect IS development and management. For example, the centralization or

decentralization of the organizational structure can affect how information is developed

and managed.

In the public management information system literature, Bozeman and Bretschneider

(1986) proposed frames similar to those proposed by Ives and Davis (1980). Bozeman and

Bretschneider (1986) maintain that the frame for public management information system

research consists of three levels: society, organization, and individual. The society level

includes environmental variables that “define resources and constraints on MIS”; the

organizational level includes variables within the organizational context that affect

information systems such as “size, structure, time frame, organizational resources, and

organizational maturity”; and the individual context “reflects characteristics of individual

actors within an organization, including cognitive style, level of satisfaction with MIS, and

other such personal and demographic” information (pp. 475-478).

Bozeman and Bretschneider (1986) further elaborated on their frame for public

management information systems by combining the previous variables into four models of

publicness and proposing two types of environment. The environmental variables were

included in two models (economic authority model and political authority model), which

includes the unique economic and political characteristics of public organizations. The

organizational variables were included in a third model (work context model) and the


individual variables were included in a fourth model (personnel and personnel system

model).

Bozeman and Bretschneider (1986) contend that the four models are located in two

types of environment: the economic authority model and the political authority model are

located in the distal environment, and the work context model and the personnel and

personnel system model are located in the proximate environment. Bozeman and

Bretschneider (pp. 480-481) stated:

[T]he models are interrelated because they stand in hierarchical relation. The Political Authority and Economic Authority models comprise the distal environment and introduce constraints which are broad and sweeping (e.g., market failures, public interest) and these remote factors of the distal environment can be viewed as directly influencing the "proximate" environment (i.e., the Work Context Model), which in turn directly influences the attitudes and behaviors of individuals in organizations (e.g., the Personal Model).

Thus, there are three types of environment in which an information system exists. The first environment includes variables that exist within the

organization, the second environment includes external variables that have immediate

relevance and direct interactions with the organization, and the third environment includes

external variables that have potential relevance and do not have direct interaction with the

organization. Although different researchers have different names for these types of

environments, the different terms ultimately mean the same types of environment.1

1 It is interesting to note that some researchers have used the term ‘organizational environment’ to refer to conditions that exist within the organization. From a systems theory viewpoint, the term organizational environment denotes everything that exists outside the organizational boundaries. This situation has led to some confusion regarding the variables that exist within this type of environment. In order to prevent further confusion regarding the variables that exist within each frame, the inner frame in the model in Figure 5 will be called ‘organizational boundary.’


Thus, the three environments will be incorporated in DeLone and McLean’s model

as three frames within each other. The organizational boundary frame includes all internal

variables that exist within the organizational boundaries. The middle frame, task

environment, includes those external variables that have immediate relevance and direct

interactions with the organization. The outer frame, general environment, includes those

external variables that have potential relevance and do not have direct interaction with the

organization. These titles were chosen because of their familiarity in the literature.

As step four in the process of developing the Seven-Dimension model, a seventh

dimension was added to DeLone and McLean’s model. Based on the first section of the

literature review, this seventh dimension is called External Environment Satisfaction (EES).

EES denotes the satisfaction of external stakeholders that use an information system or its

outputs, and could directly or indirectly influence the information system. This influence

could be, for example, through many of the constraints that could be imposed on public

organizations from the external environment (i.e., legal and budgetary constraints).

Figure 5 represents the final product after finishing all the steps. The causal paths

among the seven dimensions in the model are represented mathematically as:

OI = f (SQ, IQ, SU, US, IM, EES, OB)        (3.A.1)

IM = f (SU, US, EES, OB)        (3.A.2)

SU = f (SQ, IQ, US, EES, OB)        (3.A.3)

US = f (SQ, IQ, SU, EES, OB)        (3.A.4)

SQ = f (OB, EES)        (3.A.5)

IQ = f (OB, EES)        (3.A.6)


where SQ, IQ, SU, US, IM, and OI represent System Quality, Information Quality, System

Use, User Satisfaction, Individual Impact, and Organizational Impact. EES represents the

External Environment Satisfaction. OB represents the effects of factors within the

Organizational Boundary that affect the previous six variables, such as size of the

organization and control of the information system (centralized vs. decentralized). The

model operationalization section presents the definitions and how these variables are

measured.

Equation 3.A.1 suggests that Organizational Impact is determined directly by

Individual Impact and indirectly by the rest of the variables in the model through affecting

Individual Impact; moreover, factors within the Organizational Boundary and External

Environment Satisfaction determine Organizational Impact.

Equation 3.A.2 suggests that Individual Impact is determined directly by System Use

and User Satisfaction; and indirectly by System Quality and Information Quality through

affecting System Use and User Satisfaction. Moreover, factors within the Organizational

Boundary and External Environment Satisfaction determine Individual Impact.

Equation 3.A.3 suggests that System Quality, Information Quality, User Satisfaction,

factors within the Organizational Boundary, and External Environment Satisfaction

determine System Use.

Equation 3.A.4 suggests that System Quality, Information Quality, System Use,

External Environment Satisfaction, and factors within the Organizational Boundary

determine User Satisfaction.


Equation 3.A.5 suggests that factors within the Organizational Boundary and

External Environment Satisfaction determine System Quality.

Equation 3.A.6 suggests that factors within the Organizational Boundary and

External Environment Satisfaction determine Information Quality.

Model to be Tested

As indicated in the previous chapter, most of the studies on information system

success and all attempts to develop models of information system success were conducted in

the private sector. Moreover, the empirical investigations were limited to parts of DeLone

and McLean’s (1992) model; a complete investigation of the usefulness of the whole

model was never conducted.

Consequently, this current study contributes to the body of knowledge concerning

evaluating information systems by conducting a complete assessment of the usefulness of

the DeLone and McLean model, both in general and in the context of the public sector.

From the point of view of developing a comprehensive model in this study to evaluate

information systems in the public sector, this assessment is a critical first step before using

the DeLone and McLean model as the basic building block for developing this model.

A thorough assessment of a multidimensional model such as DeLone and McLean’s

model would necessitate a lengthy, intensive project. To keep this study manageable, it will focus on testing the usefulness of the DeLone and McLean model in the public sector as the first step in building the Seven-Dimension


model. The incorporation of the external environment satisfaction dimension will be left for

future research.

Figure 6 illustrates the model to be empirically tested in this study. The relationships

in this model are:

OI = f (SQ, IQ, SU, US, IM)        (3.B.1)

IM = f (SU, US)        (3.B.2)

SU = f (SQ, IQ, US)        (3.B.3)

US = f (SQ, IQ, SU)        (3.B.4)

Research Question and Hypothesis

This study is being conducted to answer the following question: To what extent is

DeLone and McLean’s (1992) model useful in evaluating information systems in the public

sector? In order to answer this research question, an empirical study will be conducted to

test the relationships among the variables in DeLone and McLean’s model. In the proposed

study, the following null hypothesis will be tested:

HO: The relationships indicated in the 3.B equations do not exist.


[Figure 6 diagram: boxes for System Quality, Information Quality, System Use, User Satisfaction, Individual Impact, and Organizational Impact, linked as in equations 3.B.1-3.B.4.]

Figure 6. Model to be Tested in This Study. Source: DeLone and McLean (1992).


Model Operationalization


In order to empirically test DeLone and McLean’s model, all variables in the model

must be operationalized. Existing measures of information system success that have

acceptable psychometric qualities will be used. Questionnaires have been developed for this

purpose, taking into account the necessary cultural factors. Appendix A includes the users’

questionnaire and Appendix B includes the supervisors, heads of departments, and general

managers’ questionnaire.

System Quality and Information Quality

Items from Bailey and Pearson (1983) will be used to operationalize System Quality

and Information Quality. System Quality is “concerned with whether or not there are bugs in

the systems, the consistency of the user interface, ease of use, response rates in interactive

systems, documentation, and, sometimes, quality and maintainability of the program code”

(Seddon & Kiew, 1994, p. 101). Seven items were used to operationalize the System Quality

variable.

Information Quality is “concerned with such issues as timeliness, accuracy, relevance,

and format of information generated by an information system” (Seddon & Kiew, 1994,

p. 101). Nine items were used to operationalize the Information Quality dimension. Bailey

and Pearson’s instrument is widely accepted, has been tested for reliability and validity by

several researchers (Ives et al., 1983; Baroudi & Orlikowski, 1988; Iivari & Ervasti, 1994;


Mahmood & Becker, 1985/1986; Li, 1997; Khalil & Elkorody, 1997), and has become a

standard instrument in the MIS field.

System Use

System Use examines the actual use of an information system, the extent of its use in

the users’ jobs, and the number of information system packages used in the users’ jobs.

Igbaria, Pavri, and Huff (1989) developed a four-item scale to measure this variable, and

the instrument has been proven reliable and valid (Igbaria, 1990, 1992; Anakwe,

Anandarajan, & Igbaria, 1998). This scale will be used to operationalize System

Use in this study.

User Satisfaction

User Satisfaction examines the successful interaction between the information

system itself and its users (Glorfeld, 1994). Seddon and Yip’s (1992) instrument will be

used to operationalize the User Satisfaction variable. The instrument consists of four

questions. In the context of their study, Seddon and Kiew (1994) tested the reliability of their

instrument. The reliability coefficient (alpha) was 0.91.

Individual Impact

Individual Impact examines the effect of an information system on users’

performance. Doll and Torkzadeh’s (1998) 12-item instrument will be used to


operationalize the Individual Impact variable. Torkzadeh and Doll (1999) further validated

the same 12 items for the purpose of developing an instrument for measuring the impact of

information technology on work. The instrument measures the impact on four work aspects

(task productivity, task innovation, customer satisfaction, and management control). The

reliability scores were 0.93, 0.95, 0.96, and 0.93 for task productivity, task innovation,

customer satisfaction and management control, respectively. The overall reliability for the

instrument was 0.92.

Organizational Impact

Organizational Impact examines the influence of the information system on overall

organizational performance. Sabherwal’s (1999) instrument will be used to operationalize

the Organizational Impact variable. This measure was chosen because it measures the

impact of information systems in areas that are highly important to all types of organizations,

particularly public organizations. These areas include reduction of administrative costs,

improvement of organization image, and enhancement of internal operations and customer

satisfaction. New trends in public management, such as the reinvention movement, push

toward increasing public organizations’ efficiency and effectiveness in these areas. As such,

assessing the impact of information systems in these areas is highly needed and relevant to

the nature of public organizations.

There are five items measuring the impact of information systems in these areas.

The Cronbach’s alpha for this measure was 0.84. This alpha level is above the 0.70 alpha

level accepted as appropriate by most information system researchers (Yoon and Guimaraes,

1995).


Population and Sample


The main focus of this study is to assess the usefulness of DeLone and McLean’s

model in evaluating information systems in public organizations. Thus, the unit of analysis

is the information system function in a public organization. Information system function is

defined as “all IS groups and departments within the organization” (Saunders & Jones, 1991,

p. 2).

The study was conducted in the State of Kuwait. The State of Kuwait is a

constitutional monarchy that has been ruled by the al-Sabah family since the mid-18th

century. It is located in the Middle East, at the northern end of the Arabian (Persian) Gulf,

bordered by Iraq on the north and Saudi Arabia on the south. The national language is

Arabic and Islam is the state religion. The State of Kuwait has a total area of 17,818 square

km (6,880 square miles) and its population was estimated to be 2,031,000 in the year

2000 (Anonymous, 2001). The public sector in the State of Kuwait consists of ministries,

partially owned independent organizations, and fully owned independent organizations.

In Kuwait, the term “public organization” is usually used to denote the 18

government ministries. Thus, the population in the context of this study will be the

information system end users in the 18 ministries in the State of Kuwait.

The researcher used a simple random (lottery) method to choose 6 of the 18 Kuwaiti

ministries for participation in this study: Ministry of Interior, Ministry of Communications,

Ministry of Treasury, Ministry of Electricity and Water, Ministry of Social Work, and

Ministry of Justice. After identifying the six ministries, the researcher obtained the

necessary approvals to conduct the study. This included getting the approval of the Office


of Regulatory Compliance/Human Subjects Committee at The Pennsylvania State

University (Appendix C) and Kuwait University (the researcher’s sponsor). Next, letters

from Kuwait University were hand delivered to top management in each of the six ministries

to invite them to participate in the study. During the meetings with ministry management,

the researcher explained the goal and importance of this study.

All six ministries agreed to participate in the study, and the managers provided the

researcher with letters to facilitate his work in their ministries (Appendix D). The researcher

took the letters to the ministries’ human resources departments, which provided the names

of their employees who use information systems in their work as well as a list of department

managers who use information systems in their departments and work.

The following criteria were used to determine the sample’s participants. First,

employees should be directly using an information system application in their work. Second,

managers should be using an information system either directly or indirectly (through its output).

The researcher selected the study’s subjects from these two lists using a simple

random (lottery) method. A total of 500 employees was chosen, including 350 end users

and 150 managers.
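As a rough sketch of the lottery-style selection (the ministry and employee names below are placeholders, not the actual lists, and the seed is arbitrary), drawing without replacement can be done with Python’s `random.sample`:

```python
import random

rng = random.Random(42)  # fixed seed so the draw is reproducible

ministries = [f"Ministry {i}" for i in range(1, 19)]   # the 18 ministries (placeholder names)
chosen_ministries = rng.sample(ministries, 6)          # simple random draw of 6

# Hypothetical HR lists of eligible employees in the chosen ministries
end_users = [f"end-user-{i}" for i in range(1, 1001)]
managers = [f"manager-{i}" for i in range(1, 301)]
sample = rng.sample(end_users, 350) + rng.sample(managers, 150)

print(len(chosen_ministries), len(sample))  # 6 500
```

`sample` draws without replacement, which mirrors a lottery: no ministry or employee can be selected twice.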

The procedural steps that were followed in conducting the survey instruments are

described in the following sections.

Translation and Pilot Study

The original questionnaires were developed in English. Because Arabic is the

official language in Kuwait, some of the participants may not have had a comprehensive


command of the English language. To avoid communication problems, the questionnaires

were translated into Arabic. To ensure that the original meaning, validity, and reliability of

the instruments were retained, Brislin’s (1986) method for translating research instruments

was used in this study. Several researchers have found this method to be highly reliable and

acceptable as a method for translating questionnaires from one language to another without

jeopardizing the integrity of the original questionnaires (e.g., Al-Janee, 1989; Anakwe et al.,

1999; Ishman, 1996).

The translation process involved four steps: translating the questionnaires from

English to Arabic (by the researcher); translating them back to English (by another doctoral

student); producing a second Arabic version from the back-translated English; and producing

a second English translation from the second Arabic version. Then, two Arab researchers

who speak both English and Arabic compared the two English versions and the two Arabic

versions with each other (Appendix E). All concepts in the original questionnaires retained

their meanings after the translation process. Therefore, the first Arabic version was adopted

(Appendices F and G).

Once the instruments were translated, the researcher consulted with two Kuwaiti

professors in public administration and information systems regarding the suitability of the

questionnaire items to the Kuwaiti employees and workplace. The professors determined

that the two questionnaires were appropriate, although they suggested that the management

questionnaire might not be comprehensive enough to capture all aspects of information

system impact. Consequently, three items from a Mahmood and Soon (1991) instrument

were added that measure the impact of information systems on coordination with other

organizations, communication with other organizations, and improvement in decision-


making. After reviewing the modified management questionnaire, the two professors did

not suggest additional changes.

Once the questionnaires were finalized, they were pre-tested in a pilot study in two

Kuwaiti ministries (Ministry of Electricity and Ministry of Communications). The formal

approval to conduct the pilot study in the two ministries was obtained as part of the overall

approval to conduct the study.

The researcher distributed 20 questionnaires in each of the two ministries, 10

questionnaires for end users and 10 questionnaires for managers. To encourage individuals

to participate in the pilot study, the researcher visited the two ministries and met with the

subjects and their superiors. In these meetings, the researcher explained the goal of the

study and reviewed the questions with them. Moreover, the researcher encouraged the

participants to comment on and discuss any part of the questionnaire they might consider to

be ambiguous. The participants were also encouraged to write down any comments about

any questions that might be unclear.

A total of 35 questionnaires was collected. The researcher reviewed each section of

the questionnaires, including both wording and content. The responses to each question

were evaluated. Overall, the pilot study participants indicated that the questionnaires were

understandable. Most of the participants agreed that the items “the information system helps

me create new ideas,” “the information system helps me come up with new ideas,” and “the

information system helps me try out innovative ideas” have the same meaning and suggested

combining them into one item. Other participants had a few minor wording changes and

clarifications. For example, the term “information system” was not clear enough for many


of the participants, so the term “computer” and its Arabic translation were added to the

questionnaires.

After discussing these suggestions with several professors, the researcher modified

the questionnaires. Once the changes were complete, the researcher informally discussed

the second versions of the questionnaires with several participants. These participants

confirmed that the new questionnaires were clearer than the first version and did not suggest

any further changes.

Data Collection Method

The study used two surveys to collect data. The end user survey collected data about

the Information Quality, System Quality, System Usage, User Satisfaction, and Individual

Impact variables. The management survey collected data about the Organizational Impact

variable and was distributed to employees who are supervisors, department heads, and/or

general managers. The logic behind designing two surveys is that end users interact with

information systems on a daily basis, so they have the necessary knowledge to evaluate

variables that are directly related to information systems and their productivity.

Management, on the other hand, should have the knowledge about the overall performance

of the organization, so employees in this level should be able to evaluate whether

information systems have either a positive or negative effect on overall organizational

performance.

Once the instruments and procedures were approved, the questionnaires were

administered to the employees in their workplace. The administration of the questionnaires

started with an initial contact with the managers of the government units to explain the goal


and significance of the research, the importance of their participation, and to set a date and

time for the participants to complete the questionnaires. Next, the researcher went to these

organizations and administered the instruments. At the beginning of each survey

administration session, the researcher thanked the participants for their interest and

cooperation and briefly introduced the goal and significance of the research and the

importance of their participation. Subjects were told that the study represents a doctoral

dissertation attempting to develop a comprehensive model for evaluating information

systems in the public sector. Furthermore, the researcher emphasized that participation in

the survey was completely voluntary and advised the subjects that all responses would be

kept confidential. Finally, participants were instructed that there were no right or wrong

answers and that they needed only to record their first perceptions after reading each question.

The participants were given all the time they needed to complete the survey.

Envelopes were also provided to the participants in which to place their completed

questionnaires.

Some organizations agreed to return the completed forms to the researcher on the

same day that they were distributed; other organizations requested a week to collect the

completed questionnaires because their employees were extremely busy. Thus, in order to

give the participants all the time they needed to complete the questionnaires and ensure that

the answers on the questionnaires reflect the real perceptions of the participants, the

researcher agreed to come back on the selected day.

In the organizations requiring more than one day to process the survey, the

researcher used a three-step follow-up. First, when the researcher delivered the surveys, he

reminded the participants that his telephone number and e-mail address were on the cover


letter (Appendices H and I), and that they should not hesitate to call him anytime if they had

any questions regarding the survey. Second, three days after the survey was distributed, the

researcher visited these organizations for the purpose of answering any questions that the

participants might have and to remind them about the day the questionnaires were to be

collected. Third, the day before the questionnaires would be collected the researcher made

telephone calls to the participants. Those that he was able to reach on their office phones

were given the opportunity to ask any questions that they might have and to remind them

when the questionnaires would be collected. In addition, the researcher asked them to share

this message with any of their colleagues who could not be reached by phone.

A total of 500 questionnaires were distributed, 350 questionnaires to end users and

150 to management employees. A total of 390 questionnaires (78%) were returned (see

Table 1); 298 were end users’ questionnaires and 92 were management questionnaires. This

makes the response rate 85% for end users and 61% for management. According to several

public organization managers and Kuwaiti researchers, the lower response rate for the

management questionnaire might be due to the summer holidays and the preoccupation of

managers with the end of the budget cycle, which coincided with the distribution of the

questionnaires.

Of the 390 questionnaires, 27 questionnaires were eliminated because the

participants left one or more of the questions that assessed Information Quality, System

Quality, User Satisfaction, System Usage, Individual Impact, or Organizational Impact

unanswered. This makes the total number of usable questionnaires 363 (73%), of which 278

(79%) were end users and 85 (57%) were management.
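The percentages above follow directly from the counts reported in the text; a quick arithmetic check:

```python
# Counts taken from the text above
distributed = {"end users": 350, "managers": 150}
returned = {"end users": 298, "managers": 92}
usable = {"end users": 278, "managers": 85}

for group in distributed:
    print(f"{group}: returned {returned[group] / distributed[group]:.0%}, "
          f"usable {usable[group] / distributed[group]:.0%}")

overall_returned = sum(returned.values()) / sum(distributed.values())
overall_usable = sum(usable.values()) / sum(distributed.values())
print(f"overall: returned {overall_returned:.0%}, usable {overall_usable:.0%}")
```

This reproduces the 85%/61% group return rates, the 78% overall return rate, and the 79%/57% usable rates against a 73% overall usable rate.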


Table 1

Questionnaires Distribution and Response for Six Ministries

Ministry                      Total        Top management                           End users                                Total
                              distributed  Dist.  Coll.  Usable  Usable resp. rate  Dist.  Coll.  Usable  Usable resp. rate  Coll.  Usable

Ministry of Communications        83        25     15      13        52%             58     53     49       84.48%            68     62
Ministry of Electricity           83        25     13      12        48%             58     42     39       67.24%            55     51
Ministry of Finance               83        25     14      11        44%             58     48     43       74.13%            62     54
Ministry of Interior              85        25     12      13        52%             60     46     44       73.33%            58     57
Ministry of Justice               83        25     18      17        68%             58     55     51       87.93%            73     68
Ministry of Social Work           83        25     20      19        76%             58     54     52       89.66%            74     71

Total                            500       150     92      85        57%            350    298    278       79%              390    363


Data Screening and Reliability of Measurement Instruments

One of the primary goals of data analysis is to verify that the data are accurately

coded and to ensure that the responses are valid (Tabachnick & Fidell, 1989). In this research,

several steps have been taken to maximize the reliability of the data. First, the returned

questionnaires were checked for completeness. Incomplete questionnaires (one or more

unanswered questions) were dropped from the data set. Exceptions were made for the

demographic questions, since answers to these questions do not affect the tests involving

any key variables in the study. All acceptable questionnaires were assigned an identification

number.

Second, the data were coded and entered into a computer data file using the SPSS

For Windows (release 10) software package. The researcher did the data entry. Third, the

raw data were checked for entry errors through the use of the FREQUENCY procedure in

SPSS. When errors were found, the data were compared to the original surveys and

codebook, and mistakes were corrected.
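The same kind of entry-error screen performed with the SPSS FREQUENCY procedure can be sketched outside SPSS with Python’s standard library (the item values below are invented for illustration):

```python
from collections import Counter

# Invented coded responses for one 5-point Likert item; 9 and 55
# stand in for keystroke errors that a frequency table would expose.
sq_item_1 = [4, 5, 3, 2, 9, 4, 1, 5, 55, 3, 4]

frequencies = Counter(sq_item_1)        # analogue of a frequency-table output
print(sorted(frequencies.items()))

valid_codes = set(range(1, 6))          # 1-5 are the only legal codes
suspect_rows = [(row, value) for row, value in enumerate(sq_item_1)
                if value not in valid_codes]
print("re-check against original surveys:", suspect_rows)
```

Any code outside the legal range flags the row for comparison against the original survey and codebook, which is exactly the correction loop described above.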

Fourth, after checking for errors and cleaning the data, the reliability of the

instruments was checked. Reliability refers to “the accuracy or precision of a measuring

instrument” (Kerlinger, 1986, p. 405). In other words, reliability is the extent to which an

experiment, test, or any measurement procedure yields the same results on repeated trials.

Cronbach’s alpha is the most popular method used to test reliability. The value of alpha

ranges from 0 to 1. As alpha gets closer to 1, the reliability of the instrument increases.
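For reference, Cronbach’s alpha is computed from the item variances and the variance of the summed scale; a small sketch on simulated data (the scores below are simulated, not the study’s):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a (respondents x items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
true_score = rng.normal(3, 1, size=(300, 1))
items = true_score + rng.normal(0, 0.7, size=(300, 4))  # 4 items sharing one trait
print(round(cronbach_alpha(items), 2))  # high, since the items share a common trait
```

When the items share little common variance, the bracketed ratio approaches 1 and alpha falls toward 0, which is why alpha serves as an internal-consistency measure.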


Although Cronbach’s alpha (an internal consistency technique) is widely accepted

and used by researchers as a measure of reliability, there is no agreement on the minimum

acceptable value for reliability using alpha. Nunnally (1976), for example, asserted that .70

is the minimum value of alpha that is acceptable for reliability. Although Price and Mueller

(1986) maintained that, “ ...0.60 is generally viewed as the minimum acceptable level” (p.

6), a minimum acceptable value of 0.70 for alpha was used in this study.

Table 2 shows the alpha coefficients for the measurement instruments. The

reliability of all instruments is within the range of previous studies (see Chapter 2).

Table 2

Reliability of Measurement Instruments

Instrument              Reliability*

System Quality .91

Information Quality .85

System Usage .75

User Satisfaction .95

Individual Impact .95

Organizational Impact .89

* Cronbach’s alpha


Once the errors were identified and the data cleaned, the COMPUTE procedure in

SPSS was used to create the composite variables. The composite variables were created by

averaging the item responses belonging to each variable.
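The COMPUTE step amounts to row-wise averaging of each variable’s items; a minimal equivalent outside SPSS (item names and values here are invented):

```python
import numpy as np

# Invented responses: keys are items, values are scores for three respondents
responses = {
    "SQ1": [4, 5, 3],
    "SQ2": [4, 4, 2],
    "SQ3": [5, 5, 3],
}

def composite(item_names):
    """Average the listed items for each respondent (analogue of the SPSS COMPUTE step)."""
    scores = np.array([responses[name] for name in item_names], dtype=float)
    return scores.mean(axis=0)

system_quality = composite(["SQ1", "SQ2", "SQ3"])
print([round(x, 2) for x in system_quality])  # [4.33, 4.67, 2.67]
```

Each respondent thus gets one composite score per model variable, which is what the correlation, regression, and path analyses operate on.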

Limitations of the Study

The population for this study includes public organizations in the State of Kuwait. A

sample of 500 participants was drawn from this population using a simple random approach.

Subjects in the sample came from six of the eighteen ministries in the State of Kuwait. The

six ministries were also selected using a simple random approach. Various personal and

professional characteristics were represented in the sample in terms of gender, age,

education, and organizational level.

Because the study’s ministries and participants were randomly selected and span varied

demographic characteristics, the findings of this study should be

generalizable to the study’s population - that is, government ministries in the State of

Kuwait. Because the focus is limited to Kuwait’s public sector, the external validity of the

study’s findings is limited to ministries in the State of Kuwait. External validity refers to the

generalizability of a research finding to other populations, settings, treatment arrangements,

and measurement arrangements (Kidder & Judd, 1986). Consequently, the findings of this

study may not be generalizable to other types of organizations in the State of Kuwait, or to

other countries.

Another limitation of this study is related to the data collection method. The survey

questionnaire was the only instrument used to collect data from the study’s subjects. Thus, a


large part of the reliability of the collected data depends on the respondents’ attention to

detail when answering the questions. Although the researcher took reasonable precautions to

eliminate any threat to the reliability of the data (e.g., meeting with the respondents before

the questionnaires were distributed, giving the respondents all the time they needed to

complete the questions, and obtaining the advance approval of upper management), it is

impossible to guarantee the reliability of the data when a survey questionnaire is the only

instrument used to collect it.

General Outline of Plan for Data Analysis

This study used a five-step plan to analyze the data. First, the study used descriptive

statistics to show the distribution of responses. Second, a correlation analysis was used to

examine the strength and direction of the associations among the variables in the study.

Third, a factor analysis was used to check the unity and number of concepts and variables in

the study. After completing factor analysis, a second round of correlation analysis and

reliability analysis were conducted. Fourth, variables found to be significant in the second

round of correlation analysis were included in a regression analysis to examine the strength

of the relationships among the dependent variables and independent variables in the study.

Fifth, variables found to be significant in the regression analysis were included in the path

analysis. Path analysis is used to uncover the direct and indirect effects of the independent

variables on the dependent variables and on other independent variables. As such, it

represents a confirmatory tool for the regression analysis conducted in the fourth step.
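The direct/indirect decomposition in the fifth step can be illustrated with standardized regression coefficients on a toy three-variable chain (the coefficients and data below are simulated, not the study’s results):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
SQ = rng.normal(0, 1, n)
SU = 0.5 * SQ + rng.normal(0, 1, n)              # path SQ -> SU
IM = 0.4 * SQ + 0.6 * SU + rng.normal(0, 1, n)   # paths SQ -> IM and SU -> IM

def path_coefficients(y, *xs):
    """Standardized OLS slopes, i.e. path coefficients."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([np.ones(n)] + [z(x) for x in xs])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta[1:]

p_sq_su = path_coefficients(SU, SQ)[0]
p_sq_im, p_su_im = path_coefficients(IM, SQ, SU)

direct = p_sq_im                 # SQ -> IM
indirect = p_sq_su * p_su_im     # SQ -> SU -> IM, product of the chained paths
print(round(direct, 2), round(indirect, 2))
```

The indirect effect is the product of the coefficients along the chained paths; summing it with the direct effect gives the total effect, which is how path analysis confirms and decomposes the regression results.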


Chapter 4


RESEARCH FINDINGS

The findings are presented in eight sections. Section one describes the participant

demographic profile. Section two presents the findings of the first round of correlation

analysis. Section three presents the findings of factor analysis. Section four presents the

findings of the second round of reliability analysis of the measurements used in this study.

Section five presents the findings of the second round of correlation analysis. Section six

presents the findings of the regression analysis. Section seven presents the findings of path

analysis. Section eight presents a comparison between the findings of regression analysis

and the findings of path analysis.

Respondent Characteristics

This section presents the first step in the data analysis plan: descriptive

statistics describing the study’s sample. Tables 3 and 4 present a profile of the survey respondents

with regard to gender, age, education, government career, length of service in current

organization, and knowledge of information systems. The characteristics for both

employees and managers are discussed for each category.


Age and Education

Most of the employee respondents (92%) were 20-39 years old. The manager

respondents were older (ages 30-50) (84%). Moreover, manager respondents had higher

education levels than those of the employee respondents. Over half of the managers (56%)

had a bachelor degree or higher, while 26% of the employee respondents had the same

education level.

The age and education levels indicate that, as we go higher in the government

hierarchy, the age and education levels increase. However, if we consider that 92% of the

employees and 75% of the managers are between 20 and 39 years of age, we can conclude

that the age differences between the employees and managers in Kuwaiti public

organizations are not very significant. The employee respondents’ education levels are high,

with almost a third of the respondents (32%) having completed high school and a technical

institution (see Table 3), indicating that these employees have at least one year of post-secondary technical training.

Gender

More than half (56%) of the employee respondents were female, while almost three

quarters (71%) of the manager respondents were male.


Table 3

Respondent Profile: Personal Characteristics

Characteristic   Value                        Employees             Managers
                                              Frequency  Percent    Frequency  Percent

Age              <20 years                        2        0.7          0        0
                 20-29 years                    149       52.1          8       10.5
                 30-39 years                    114       39.9         49       64.5
                 40-49 years                     18        6.3         15       19.7
                 >50 years                        3        1.0          4        5.3

Gender           Male                           122       43.3         54       71.1
                 Female                         160       56.7         22       28.9

Education        <High school                    58       20.4          3        4.0
                 High school                     61       21.5         10       13.3
                 High school and institution     92       32.4         20       26.7
                 Bachelor’s                      71       25.0         37       49.3
                 Master’s                         0        0            4        5.3
                 Doctorate                        2        0.7          1        1.3

Length of Government Career and Years of Service in the Current Organization

As indicated in Table 4, most of the employee respondents (84%) had been

employed with the Kuwaiti government for between 1 and 15 years. As expected, the

manager respondents had longer government careers (75% of the respondents had between 6

and 20 years of service). The manager respondents had also been in their current


organizations longer than the employee respondents (69% of manager respondents had been

in their current organizations for 6-20 years, while 86% of employee respondents had been in theirs for 1-15 years).

The length of government career and years of service in the current organization

distribution of both the employees and managers indicates that the individuals in the sample

have good experience with information system in their organization that could enable these

individuals to evaluate the different dimensions of information system.

Information Systems Experience

Over half of the employee respondents (51%) had from 1 to 5 years’ experience

using information systems (see Table 4). The second largest group (24%) had from 6 to 10

years’ experience. Several employees (7%) had 16 to 20 years’ experience, and a few (2%)

had 21-25 years’ experience. Only 16% of the employees had less than one year’s

experience with information systems. The manager respondents had a similar distribution of

experience levels. Almost two thirds (66%) of the manager respondents had between 1 and

10 years’ experience using information systems (Table 4).

These statistics indicate that both the employees and the managers in the sample

have sufficient background in information systems to enable them to evaluate the different

dimensions of information systems.


Table 4

Respondent Profile: Professional Characteristics

Characteristic          Value            Employees             Managers
                                         Frequency   Percent   Frequency   Percent
Length of               <1 year          23          8.6       2           2.9
Government Career       1 to 5 years     103         38.4      3           4.3
                        6 to 10 years    84          31.3      12          17.1
                        11 to 15 years   38          14.2      23          32.9
                        16 to 20 years   14          5.2       17          24.9
                        21 to 25 years   6           2.2       7           10.0
                        >26 years        0           0         6           8.6
Experience with         <1 year          39          15.9      6           9.7
Information Systems     1 to 5 years     126         51.2      22          35.5
                        6 to 10 years    58          23.6      19          30.6
                        11 to 15 years   16          6.5       8           12.9
                        16 to 20 years   6           2.4       5           8.1
                        21 to 25 years   1           0.4       1           1.6
                        >26 years        0           0         1           1.6
Years of Service with   <1 year          25          8.8       2           2.7
Current Organization    1 to 5 years     132         46.5      13          17.3
                        6 to 10 years    76          26.8      24          32.0
                        11 to 15 years   37          13.0      19          25.3
                        16 to 20 years   9           3.2       9           12.0
                        21 to 25 years   5           1.8       3           5.3
                        >26 years        0           0         3           5.3


Correlation Analysis

This section presents the second step in the data analysis plan: examining the strength and direction of associations among the variables in the study. Pearson’s correlation coefficient was used. Table 5 shows the results of the correlation analysis.

Table 5

Pearson’s Correlation Matrix of the Six Variables in the Study

                           1       2       3       4       5       6
1. Information quality     1.00    .62**   .52**   .71**   .76**   .77**
2. Organizational impact           1.00    .57**   .80**   .74**   .81**
3. System usage                            1.00    .57**   .50**   .63**
4. Individual impact                               1.00    .67**   .83**
5. User satisfaction                                       1.00    .77**
6. System quality                                                  1.00

N = 287. **p < 0.01

The Pearson correlation coefficients in Table 5 clearly indicate that there are strong

direct associations among the variables in the study. The largest correlation coefficient is

between System Quality and Individual Impact (r = 0.83) and between System Quality and

Organizational Impact (r = 0.81). The smallest correlation coefficient is between User


Satisfaction and System Usage (r = 0.50). The rest of the correlation coefficients are between .50 and .80. All correlation coefficients are statistically significant at the .01 level.
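To make this step concrete, the coefficients in Table 5 are ordinary Pearson product-moment correlations between respondents’ scale scores. The following function is an illustrative sketch added here (it is not the study’s actual software, which would have been a statistical package); it computes a single coefficient from two raw-score samples:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Covariance numerator and the two standard-deviation terms share the
    # same (unscaled) sums, so the sample-size factors cancel.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

Applying this function to each pair of the six variable scores, across the 287 respondents, yields a matrix of the form shown in Table 5.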

The high Pearson coefficients in Table 5 are in line with other studies in the

literature. For example, Seddon and Kiew (1994) tested part of DeLone and McLean’s

model. The researchers proposed causal paths among the six variables of the model, as

illustrated in Figure 2. The researchers tested the relationships between the four variables in

the box after replacing use by usefulness and adding a new variable called “user

involvement.” The correlation analysis in Seddon and Kiew’s study indicated that the four

variables are directly associated, with Pearson correlation coefficients ranging from .55 to

.739. Seddon and Kiew (1994, p. 109) commented on these high correlations:

Such high correlation of multi-factor measures and overall satisfaction measures are not uncommon. Bailey and Pearson (1983, p. 536) report a correlation of .79 between their normalized importance-weighted measure of User Satisfaction (based on up to 39 questions) and their single-scale measure of overall . . . satisfaction.

Likewise, in several empirical studies and in the context of developing a new measure of usefulness and perceived ease of use, Davis (1989) found that usage is directly associated with usefulness and perceived ease of use, with Pearson correlation coefficients ranging from .45 to .85.

Nevertheless, the high Pearson correlation coefficients in Table 5 raise serious

concern that there may be two problems among the variables. First, there may be two or

more variables that measure the same concept. In other words, there is concern regarding

the unity and number of concepts and variables in this study. The second problem is multicollinearity, which exists when there is high correlation among the independent variables.


Consequently, these two problems have to be investigated before going further in the

data analysis. The first problem will be investigated using factor analysis. The second problem will be investigated through the variance inflation factor (VIF), a measure used to test for multicollinearity.
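The VIF is obtained by regressing each independent variable on the remaining independent variables: if that regression has coefficient of determination R²ⱼ, then VIFⱼ = 1 / (1 − R²ⱼ). The sketch below is an illustration added here, not the dissertation’s actual computation; the function name and the use of numpy are assumptions:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor matrix X.

    VIF_j = 1 / (1 - R_j^2), where R_j^2 comes from regressing column j
    on the remaining predictors (with an intercept term).
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    factors = []
    for j in range(k):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])   # intercept + other predictors
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        ss_res = resid @ resid
        ss_tot = ((y - y.mean()) ** 2).sum()
        r2 = 1.0 - ss_res / ss_tot
        factors.append(1.0 / (1.0 - r2))
    return factors
```

Values near 1 indicate no collinearity; values that are much larger (common rules of thumb flag 5 or 10) indicate that a predictor is nearly a linear combination of the others.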

Factor Analysis

This section presents the third step in the data analysis plan. This step checked the

unity and number of concepts and variables in the study. The results are reported here for

the factor analysis that investigated whether multiple variables measured the same concept.

In factor analysis, this is accomplished by examining the loading of each item on the factors

produced by the factor analysis. In the literature, there is no agreement on the loading cutoff for including an item under a specific factor. For example, Churchill (1987)

argued for a cutoff of 0.35 or 0.30. On the other hand, Rencher (1998) argued that a cutoff

of 0.30 is unacceptable. Hair, Anderson, Tatham, and Black (1992) argued that loadings

greater than 0.50 are considered very significant. Because this is the first study conducted in public organizations to evaluate information systems, and because there are no established measures in this sector, this study uses 0.60 as the cutoff for item loadings and an eigenvalue cutoff of 1.

In factor analysis, when a group of items loads highly on one factor, these items are

considered the items that measure this factor. In some cases, the factors produced and the

items loading perfectly correspond to the variables used and the items used to measure these

variables. However, in other cases, this correspondence does not take place. To solve this


problem, the researcher might change the variables he is using and create new variables.

The new variables will be the factors produced by the factor analysis and the items that loaded highly on them. However, in creating the new variables, statistical reasons should not be

the only rationale. Conceptual considerations should be taken into account (Lich, 1998). In

other words, the researcher has to go back to the literature and see whether the items that

loaded highly on one factor are used, in the literature, to measure similar concepts. If the

answer is yes, then grouping these items is conceptually and statistically correct. However,

if the answer is no then grouping of these items is statistically correct but theoretically

incorrect. In this study, the researcher paid attention to both statistical and theoretical

considerations.

Two factor analyses were conducted in this study. The first analysis included all

items that measure the independent variables (System Quality, Information Quality, System

Usage, and User Satisfaction) while the second analysis included items that measure the two

dependent variables (Individual Impact and Organizational Impact). An iterative approach

was used to conduct factor analysis. Items that did not make the loading cutoff and/or items

that loaded on more than one factor were dropped from the analysis. The remaining items

were then resubmitted to another round of factor analysis. This process continued until the

researcher reached a meaningful factor structure.
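The extraction and rotation steps of this procedure can be sketched in code. The fragment below is an illustration only: the helper names `pca_loadings` and `varimax` are hypothetical, and the dissertation’s analysis was run in a statistical package, not numpy. It extracts principal-component loadings from the item correlation matrix and applies Kaiser’s varimax rotation, the same combination named in this section:

```python
import numpy as np

def pca_loadings(X, n_factors):
    """Unrotated principal-component loadings from the correlation matrix of X.

    X is an (n_respondents x n_items) score matrix; the loadings are the
    eigenvectors of the correlation matrix scaled by sqrt(eigenvalue).
    """
    R = np.corrcoef(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(R)          # ascending order
    order = np.argsort(eigvals)[::-1][:n_factors]  # keep the largest n_factors
    return eigvecs[:, order] * np.sqrt(np.maximum(eigvals[order], 0.0))

def varimax(loadings, tol=1e-8, max_iter=500):
    """Kaiser's varimax rotation; returns the rotated loading matrix."""
    L = np.asarray(loadings, dtype=float)
    p, k = L.shape
    rot = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        d_old = d
        B = L @ rot
        # SVD-based update of the orthogonal rotation matrix
        u, s, vt = np.linalg.svd(
            L.T @ (B ** 3 - B @ np.diag((B ** 2).sum(axis=0)) / p))
        rot = u @ vt
        d = s.sum()
        if d_old != 0 and d / d_old < 1 + tol:
            break
    return L @ rot
```

The iterative item-dropping described above would then be a loop around these two calls: inspect the rotated loadings, drop items below the 0.60 cutoff or items loading on more than one factor, and re-run on the remaining items.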

Factor Analysis - Independent Variables (SQ, IQ, US, SU)

In total, 40 items were used to measure the four independent variables (SQ, IQ, US,

SU). These items were entered into the principal component factor analysis with varimax

rotation.


Table 6 shows the eigenvalue of each of the four factors that were extracted.

Table 6

Eigenvalue of Factors

Eigenvalue % Variance % Cumul. Variance

Factor 1 11.50 28.75% 28.75%

Factor 2 3.84 9.59% 38.34%

Factor 3 2.63 6.58% 44.92%

Factor 4 2.15 5.38% 50.30%

Eigenvalue refers to the amount of variance that a factor can account for. It is clear

from Table 6 that all four factors have eigenvalues greater than 1.0, which is the cutoff

adopted in this study. Factor 1 has the largest eigenvalue (11.50) and explains 28.75 % of

the variance. In total, the four factors explain 50.30 % of the variance.
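The percent-of-variance figures in Table 6 follow directly from the eigenvalues: with 40 standardized items the total variance equals 40, so each factor explains eigenvalue / 40 of it. A one-line illustration (a hypothetical helper added here, not from the dissertation) reproduces the first row of Table 6, 11.50 / 40 = 28.75%:

```python
def variance_explained(eigenvalues, n_items):
    """Percent of total variance each factor explains.

    With standardized items the total variance equals the item count,
    so factor i accounts for eigenvalues[i] / n_items of it.
    """
    return [round(100.0 * ev / n_items, 2) for ev in eigenvalues]

# variance_explained([11.50], 40) -> [28.75], matching Factor 1 in Table 6
```

Small discrepancies against the table (e.g., 9.60% vs. the reported 9.59%) arise because the table’s percentages were presumably computed from unrounded eigenvalues.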

Table 7 shows the factor loading after using varimax rotation. The following is a

discussion of the loading findings.

The System Quality Scale

This scale consists of seven items that were borrowed from Bailey and Pearson

(1983). The scale asks the user about the time lapse between the request for data and the

response to that request, the ease and difficulty of using the system, ease and difficulty of

the sentences and words used in the system, balance between cost and benefits, trust in the

system, flexibility of the system, and connectivity of the system. Six items (SQ1, SQ3, SQ4,


SQ5, SQ6, SQ7) have loaded highly on Factor 1, with loading ranging from .72 to .87. Item

SQ2 did not make the cutoff, so it was dropped from further analysis.

Information Quality Scale

This scale consists of nine items borrowed from Bailey and Pearson (1983). The scale asks

the user about information correctness, information availability, output variability,

information consistency, age of information, information comprehensiveness, display of

output, amount of information, and degree of congruence between what the users need and

output. Six items (IQ1, IQ2, IQ4, IQ5, IQ7, IQ9) loaded highly on Factor 1, with loadings

ranging from .60 to .80. Items IQ3, IQ6, and IQ8 did not make the cutoff, so they were

dropped from further analysis.

The System Usage Scale

This scale consists of 20 items that were borrowed from Igbaria, Pavri, and Huff

(1989). The scale measures usage through actual daily use of the computer, frequency of

use, the number of packages used by the participants, and the number of tasks the system is

used for. The 20 items on this scale did not load on a single factor; rather, they loaded on all

four factors. Item SU2 loaded on Factor 1 with loading of .60. Items SU6, SU7, SU8, SU9,

and SU10 loaded highly on Factor 2, with loadings ranging from .78 to .85. Items SU12,

SU16, and SU17 loaded highly on Factor 3, with loading ranging from .60 to .68. Items

SU3, SU4, SU5 loaded on Factor 4 with loadings ranging from .68 to .75. Items SU1, SU2,


Table 7

Factors of Independent Variables: Rotated Factor Matrix

Items 1 2 3 4

SQ1 Time between request and the fulfillment of request 0.87 0.10 0.01 0.01

SQ3 Sentences and words used to interact with system 0.85 0.13 0.00 -0.02

SQ4 Balance between cost and benefit 0.82 0.04 -0.09 -0.08

SQ5 Trust in the system output 0.85 0.08 0.01 0.02

SQ6 System ability to change 0.72 0.05 0.06 0.02

SQ7 System ability to connect to other organizations 0.77 0.07 -0.1 -0.08

IQ1 Information correctness 0.67 -0.03 0.11 -0.12

IQ2 Time information available compared to time needed 0.70 -0.07 0.16 0.03

IQ4 Information consistency 0.72 0.00 0.13 -0.11

IQ5 Age of the information 0.80 -0.07 0.09 0.10

IQ7 Material design of the display of the output 0.61 -0.09 0.08 0.14

IQ9 Degree of congruence between what the user wants and the output 0.65 0.05 -0.09 -0.04

US1 How adequately does system meet information needs 0.80 0.01 0.09 0.00

US2 How efficient is the system? 0.84 0.03 0.07 -0.02

US3 How effective is the system? 0.87 0.03 0.08 -0.07

US4 General satisfaction with the information system 0.85 0.04 0.02 0.00

SU3 Extent of use in Historical References task 0.07 0.16 -0.02 0.68

SU4 Extent of use in Looking for trends task 0.01 0.35 0.00 0.75

SU5 Extent of use in finding problems and alternatives task -0.03 0.28 0.01 0.75

SU6 Extent of use in Planning 0.09 0.79 0.16 0.19

SU7 Extent of use in Budgeting -0.02 0.83 0.07 0.10

SU8 Extent of use in communication 0.04 0.81 0.20 0.03

SU9 Extent of use in controlling and guiding activities task 0.05 0.85 0.13 0.06

SU10 Extent of use in decision making task -0.01 0.83 0.10 0.13

SU12 Package used in the job (Word Processing) 0.03 0.07 0.60 -0.11

SU16 Package used in the job (Graphics) 0.13 0.09 0.68 0.05

SU17 Package used in the job (communication) -0.06 0.09 0.61 -0.07


SU11, SU13, SU14, SU15, SU18, SU19, and SU20 did not make the cutoff, so they were

dropped from further analysis.

The User Satisfaction Scale

This scale consists of four items that were borrowed from Seddon and Yip (1992).

The scale measures system adequacy, system efficiency, system effectiveness, and general

satisfaction with the system. All four items loaded highly on Factor 1 with loadings ranging

from .80 to .87.

Table 8 summarizes the items eliminated from further analysis because they did not

make the cutoff. Table 9 summarizes the findings of principal component factor analysis on

the independent variables after dropping the items that did not make the cutoff.


Table 8

Summary of Items Eliminated from Further Analysis

SQ2 The ease and difficulty of using the system potential

IQ3 Variability between output information and what should be gotten

IQ8 Amount of information

IQ6 Information comprehensiveness

SU1 Time spent (in hours) using the system during working hours

SU2 Average use of the information system

SU11 Package used in the job (Spreadsheet)

SU13 Package used in the job (Data management)

SU14 Package used in the job (Modeling System)

SU15 Package used in the job (Statistical system)

SU18 Package used in the job (Programming)

SU19 Package used in the job (4GL)

SU20 Package used in the job (other packages)


Table 9

Summary of Item Loadings

Items 1 2 3 4

SQ1 Time between request and fulfillment of request 0.87 0.10 0.01 0.01

SQ3 Sentences and words used to interact with system 0.85 0.13 0.00 -0.02

SQ4 Balance between cost and benefit 0.82 0.04 -0.09 -0.08

SQ5 Trust in the system output 0.85 0.08 0.01 0.02

SQ6 System ability to change 0.72 0.05 0.06 0.02

SQ7 System ability to connect to other organizations 0.77 0.07 -0.1 -0.08

IQ1 Information correctness 0.67 -0.03 0.11 -0.12

IQ2 Time information available compared to time needed 0.70 -0.07 0.16 0.03

IQ4 Information consistency 0.72 0.00 0.13 -0.11

IQ5 Age of the information 0.80 -0.07 0.09 0.10

IQ7 Material design of the display of the output 0.61 -0.09 0.08 0.14

IQ9 Degree of congruence between what user wants and output 0.65 0.05 -0.09 -0.04

US1 How well does the system meet the information needs 0.80 0.01 0.09 0.00

US2 How efficient is the system? 0.84 0.03 0.07 -0.02

US3 How effective is the system? 0.87 0.03 0.08 -0.07

US4 General satisfaction with the information system 0.85 0.04 0.02 0.00

SU6 Extent of use in Planning 0.09 0.79 0.16 0.19

SU7 Extent of use in Budgeting -0.02 0.83 0.07 0.10

SU8 Extent of use in communication 0.04 0.81 0.20 0.03

SU9 Extent of use in controlling and guiding activities task 0.05 0.85 0.13 0.06

SU10 Extent of use in decision making task -0.01 0.83 0.10 0.13

SU12 Package used in the job (Word Processing) 0.03 0.07 0.60 -0.11

SU16 Package used in the job (Graphic) 0.13 0.09 0.68 0.05

SU17 Package used in the job (Communication) -0.06 0.09 0.61 -0.07

SU3 Extent of use in historical references task 0.07 0.16 -0.02 0.68

SU4 Extent of use in looking for trends task 0.01 0.35 0.00 0.75

SU5 Extent of use in finding problems and alternatives task -0.03 0.28 0.01 0.75


These loadings posed two dilemmas for the researcher. Should he combine all items that loaded highly on Factor 1 into one variable? Should he deconstruct the usage variable into three constructs according to the item loadings? If so, does this coincide with the literature; that is, have other writers deconstructed the usage variable into three constructs?

The loading of items that measure Information Quality, System Quality, and User

Satisfaction on one factor is not uncommon. Several writers have reached the same results

using factor analysis and examining two or more of these variables. For example, Ishman

(1996) found that the item that measured User Satisfaction loaded with the composite measure

that was used to measure Information and System Quality. Ishman said, “it might be

concluded from this result that this single-item [User Satisfaction] is measuring the same

dimension of information success as the eight items it loads with” (p. 25). McHaney et al.

(1999) tested the reliability of the end user computing satisfaction measure (EUCS). This

scale is a composite of several items that measure Information Quality and System Quality

(e.g., items: SQ1, SQ3, IQ9, SQ5, IQ1, IQ2, IQ5, IQ7, IQ9). Using factor analysis, the

writers found that all items loaded on one factor, with loading values ranging from .76 to

.94. Glorfeld (1994) combined System Quality, Information Quality, and Satisfaction into

one variable, which he called Satisfaction (Figure 3). This was done after conducting a

factor analysis where all items that measure the three variables loaded on one factor.

Accordingly, supported by the statistical evidence found in this study through the use

of principal component factor analysis with varimax rotation and the conceptual evidence

found through the work of other researchers on the same variables, the researcher decided to

combine all items that loaded highly on Factor 1 into one variable. The only exception was

Item SU1 because this item measures the average use of the information system; thus, the


conceptual base of this item does not coincide with the rest of the items. The new variable

was called Satisfaction (as it was called by Glorfeld) to contribute to building unified

concepts in the information system and public management information system fields.

Regarding System Usage, several researchers have dealt with System Usage as a

multi-dimensional concept (Igbaria, 1992; Igbaria et al., 1989; Kim & Lee, 1986). The

dimensions that these researchers identified included actual daily use, frequency of use,

number of packages used, level of sophistication of usage, and inclusion of computer

analysis in decision-making usage as measured by the number of tasks the system is used in.

None of these studies, however, used factor analysis to analyze the inter-correlations of the

items that were included under each dimension.

The statistical evidence in this study indicated that usage is not a unitary construct

and could be deconstructed into three constructs. These three constructs correspond with

several dimensions identified by other researchers. For example, SU6, SU7, SU8, SU9, and

SU10 correspond to the inclusion of computer analysis in the decision-making dimension;

SU12, SU16, and SU17 correspond to the number of packages used; and SU3, SU4, SU5

could be considered as a subset of the inclusion of computer analysis in decision-making

dimension.

Because this is the first study to evaluate information systems in the public sector

that attempts to develop a comprehensive model - and in order to avoid further

complicating the investigation and analysis of the study’s model - the researcher chose to

consolidate all usage items that loaded on Factors 2, 3, and 4 into one variable called System

Usage. Deconstructing the System Usage variable, and how this could affect the

relationships with other variables in the model, will be left to future research.


Factor Analysis - Dependent Variables (IM, OI)

This subsection presents the findings of the factor analysis conducted on the two

dependent variables Individual Impact and Organizational Impact. In total, 18 items were

used to measure the two dependent variables. These 18 items were entered into the principal

component factor analysis with varimax rotation. Table 10 shows the eigenvalue of each

of the two factors that were extracted.

Table 10

Eigenvalue of Factors

Eigenvalue % Variance % Cumul. Variance

Factor 1 6.27 36.90% 36.90%

Factor 2 5.02 29.56% 66.46%

Table 10 indicates that both factors have eigenvalues greater than 1.0, which is the

cutoff point for this study. Factor 1 has the larger eigenvalue (6.27) and explains 36.90 %

of the variance. In total, the two factors explain 66.46 % of the variance.

Table 11 shows the factor loading after using varimax rotation. The following is a

discussion of the loading findings.


The Individual Impact Scale

This scale consists of ten items that were borrowed from Doll and Torkzadeh (1998).

The instrument measures the impact of the information system on four work aspects (task

productivity, task innovation, customer satisfaction, and management control). All ten items

loaded highly on Factor 1, with loadings ranging from 0.69 to 0.79. These loadings exactly

coincide with the conceptual grouping provided in Chapter 3.

The Organizational Impact Scale

This scale consists of eight items. Five items were borrowed from Sabherwal (1999)

and three items were borrowed from Mahmood and Soon (1991). Seven items of this scale loaded on one factor, with loadings ranging from 0.63 to 0.79. Item OI2, which measures the impact of the information system on reducing administrative costs, did not make the cutoff. Thus, it was eliminated from the analysis in the second round of factor analysis.


Table 11

Factors of Dependent Variables: Rotated Factor Matrix

Items 1 2

IM1 Accomplished more work using the information system 0.69 0.39

IM2 Information system leads to increased productivity 0.77 0.34

IM3 Information system saves time 0.70 0.36

IM4 Information system helps apply new methods to do the job 0.77 0.16

IM5 Information system helps meet customer needs 0.74 0.44

IM6 Information system led to increased customer satisfaction 0.72 0.50

IM7 Information system led to improved customer service 0.73 0.51

IM8 Information system helps management control the work process 0.75 0.40

IM9 Information system improves management control 0.79 0.28

IM10 Information system helps management control performance 0.75 0.39

OI1 Distinguishes the organization from other organizations 0.30 0.66

OI3 Improves the efficiency of internal operations 0.24 0.78

OI4 Enhances organizational reputation 0.36 0.64

OI5 Enhances communication with other organizations 0.32 0.79

OI6 Enhances and improves coordination with other organizations 0.45 0.70

OI7 Improves decision making 0.26 0.63

OI8 Overall, makes the organization successful 0.35 0.74

The Implications of Results of Factor Analysis on the Study’s Model

This section presents the implications of the results of factor analysis for the study’s

model in terms of modifying the relationships in the model, modifying the study’s research

question, and the study’s hypothesis. Figure 7 depicts the 4-factor model that was produced

by factor analysis.


Figure 7. Study model after factor analysis.


Modifying the 3.B Equations

Based on the outcome of the factor analysis, the conceptual groupings of all

variables in the study have changed. Thus, it is expected that associations among the study’s

variables have also changed. The equations in Chapter 3 that presented the relationships in

DeLone and McLean’s models have been modified to reflect the new information as a result

of the factor analysis (Figure 7). The modified equations are:

OI = f (SU, STIS, II) (4.A.1)

II = f (SU, STIS) (4.A.2)

SU = f (STIS) (4.A.3)

STIS = f (SU) (4.A.4)

where STIS, SU, II, and OI represent Satisfaction, System Usage, Individual Impact, and

Organizational Impact.

Equation 4.A.1 suggests that Organizational Impact is determined directly by

Individual Impact and indirectly by the rest of the variables in the model through affecting

Individual Impact. Equation 4.A.2 suggests that Individual Impact is determined directly by

Satisfaction and System Use. Equation 4.A.3 suggests that System Usage is determined

directly by Satisfaction. Equation 4.A.4 suggests that Satisfaction is determined directly by

System Usage.


Modified Research Question and Hypothesis


The research question and hypothesis are expected to change to reflect the change in

the relationships in the model and subsequent changes in the equations that represent these

relationships. Consequently, the study’s research question has been modified to: To what

extent is the modified (4-factor) model useful in evaluating information systems in the public

sector?

The following null hypothesis will be tested: The relationships that are indicated in

the 4.A equations do not exist.

Scale Reliabilities

As a result of the factor analysis, most of the measures used in this study have been

modified. Consequently, the reliabilities of these measures have to be determined again.

The reliability analyses for these measures are contained in Tables 12 to 15.
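The reliability statistic reported in Tables 12 to 15 is Cronbach’s alpha, computed as α = k/(k−1) · (1 − Σ item variances / variance of total scores) for k items. The sketch below is an illustration added here (the study itself used a statistical package); it computes alpha from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    X = np.asarray(scores, dtype=float)
    k = X.shape[1]
    sum_item_var = X.var(axis=0, ddof=1).sum()   # sum of individual item variances
    total_var = X.sum(axis=1).var(ddof=1)        # variance of each respondent's total
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)
```

Alpha approaches 1 when items covary strongly (as with the .95 values reported for the Satisfaction and Individual Impact scales) and falls toward 0 when items are uncorrelated.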


Table 12

Scale Reliability of the Satisfaction Variable

Item                                                                 Corrected Item-     Alpha if
                                                                     Total Correlation   Item Deleted
1. Time between request and the fulfillment of request               .84                 .95
2. Sentences and words used to interact with the system              .82                 .95
3. Balance between cost and benefit                                  .77                 .95
4. Trust in the system output                                        .81                 .95
5. System ability to change                                          .67                 .95
6. System ability to connect to other organizations                  .72                 .95
7. Information correctness                                           .65                 .95
8. Time information available compared to time needed                .68                 .95
9. Information consistency                                           .72                 .95
10. Age of the information                                           .78                 .95
11. Material design of the display of the output                     .56                 .95
12. Degree of congruence between what the user wants and the output  .59                 .96
13. How adequately does the system meet the information needs        .78                 .95
14. How efficient is the system?                                     .82                 .95
15. How effective is the system?                                     .86                 .95
16. General satisfaction with the information system                 .84                 .95

Cronbach’s Alpha for Satisfaction = .95 (N = 287)


Table 13

Scale Reliability of the System Usage Variable

Item                                                          Corrected Item-     Alpha if
                                                              Total Correlation   Item Deleted
1. Extent of use in Historical References task                .35                 .83
2. Extent of use in Looking for trends task                   .54                 .82
3. Extent of use in finding problems and alternatives task    .47                 .82
4. Extent of use in Planning                                  .71                 .80
5. Extent of use in Budgeting                                 .68                 .80
6. Extent of use in communication                             .67                 .80
7. Extent of use in controlling and guiding activities task   .71                 .80
8. Extent of use in decision making task                      .71                 .80
9. Package used in the job (Word Processing)                  .11                 .84
10. Package used in the job (Graphics)                        .22                 .84
11. Package used in the job (Communication)                   .16                 .84

Cronbach’s Alpha for System Usage = .83 (N = 287)


Table 14

Scale Reliability of the Individual Impact Variable

Cronbach's Alpha for Individual Impact = .95 (N = 287)

                                                                  Corrected    Alpha
                                                                  Item-Total   if Item
Items                                                             Correlation  Deleted
1. Accomplish more work using the information system                  .76        .95
2. Information system leads to increasing productivity                .79        .95
3. Information system saves time                                      .74        .95
4. Information system helps in applying new methods to do the job     .67        .95
5. Information system helps in meeting customer needs                 .84        .94
6. Information system led to increasing customer satisfaction         .85        .94
7. Information system led to improving customer service               .87        .94
8. Information system helps management control the work process       .81        .94
9. Information system improves management control                     .77        .95
10. Information system helps management control performance           .79        .95


Table 15

Scale Reliability of the Organizational Impact Variable

Cronbach's Alpha for Organizational Impact = .89 (N = 287)

                                                                  Corrected    Alpha
                                                                  Item-Total   if Item
Items                                                             Correlation  Deleted
1. Distinguish the organization from other organizations              .63        .88
2. Improve the efficiency of internal operations                      .72        .87
3. Enhance organizational reputation                                  .64        .88
4. Enhance communication with other organizations                     .76        .86
5. Enhance and improve coordination with other organizations          .75        .87
6. Improve decision making                                            .60        .88
7. Overall, make the organization successful                          .74        .87


The results of the reliability analysis show that the four variables have alpha

coefficients higher than the minimum acceptable alpha value (0.70) used in this study, and

the internal consistency reliabilities of the instruments that were not greatly modified
fall within the range of the studies reviewed in Chapter 2.
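The alpha coefficients in Tables 12 through 15 follow the standard Cronbach formula. Below is a minimal sketch in Python with NumPy; the data are simulated (the sample size echoes the study's N = 287, but the scores and function name are illustrative, not the study's):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Simulated responses: three items driven by one latent trait plus noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=287)
scores = np.column_stack([latent + 0.5 * rng.normal(size=287) for _ in range(3)])

alpha = cronbach_alpha(scores)
print(round(alpha, 2))   # high internal consistency, roughly .9 for this simulation
```

An item's contribution can be checked the same way SPSS reports "alpha if item deleted": recompute alpha on the matrix with that item's column removed.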

After finishing the factor and reliability analyses, the COMPUTE procedure in SPSS

was used to create the composite variables. The composite variables were created by

averaging the items that loaded highly on each factor.
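The SPSS COMPUTE step described above is a row-wise mean over the items loading on a factor. A sketch of the equivalent operation (hypothetical item scores; NumPy stands in for SPSS):

```python
import numpy as np

# Rows are respondents; columns are the items that loaded highly on one factor.
items = np.array([
    [4.0, 5.0, 4.0],
    [2.0, 3.0, 2.0],
    [5.0, 5.0, 5.0],
])

# Equivalent of SPSS: COMPUTE satisfaction = MEAN(item1, item2, item3).
composite = items.mean(axis=1)
print(composite)
```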

Second Round of Correlation Analysis

As a result of the factor analysis and the resulting change in the theoretical grouping

of the variables, a second round of correlation analysis was conducted. Table 16 shows the

results of the correlation analysis.


Table 16

Pearson Correlation Matrix of the Four Variables in the Study

Variable                     1      2      3      4
1. Satisfaction            1.00   .07   .78**  .76**
2. System usage                  1.00    .06    .05
3. Individual impact                    1.00   .78**
4. Organizational impact                       1.00

N = 287. **p < .01

The Pearson correlation coefficients in Table 16 show that there are strong direct

associations among some of the variables in the study and there are no associations among

other variables. There is a strong direct association between Satisfaction and Individual

Impact (r = 0.78), between Satisfaction and Organizational Impact (r = 0.76), and between

Individual Impact and Organizational Impact (r = 0.78). However, the correlation analysis

indicated that System Usage is not associated (directly or inversely) with any other variable. All
three significant correlation coefficients reach the .01 level.
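A Pearson matrix like Table 16 comes straight from `np.corrcoef`. The sketch below simulates data with the structure the table reports, with satisfaction driving both impact variables while usage stays independent; the coefficients and seed are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 287
satisfaction = rng.normal(size=n)
usage = rng.normal(size=n)                                  # unrelated to the rest
individual_impact = 0.8 * satisfaction + 0.6 * rng.normal(size=n)
organizational_impact = 0.8 * individual_impact + 0.5 * rng.normal(size=n)

data = np.column_stack([satisfaction, usage, individual_impact, organizational_impact])
r = np.corrcoef(data, rowvar=False)   # 4 x 4 Pearson correlation matrix
print(np.round(r, 2))
```

As in Table 16, the simulated usage column correlates near zero with everything else, while the other three pairings are strongly positive.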

Regression Analysis

This section presents the fourth step in the data analysis plan: the results of the
regression analyses conducted to examine the strength of the relationships among


the dependent variables and independent variables in the study. Only those independent

variables found to be statistically significant in the second round of correlation analysis were

entered into the regression analyses. Consequently, three regression analyses were

conducted.

In the first regression analysis, Individual Impact was the dependent variable.

According to Equation 4.A.2 [IM = f (SU, STIS)], Individual Impact was hypothesized to

have direct relationships with Satisfaction and System Usage. However, the correlation

analysis findings indicated that the satisfaction variable was the only variable found to have

a statistically significant direct association with Individual Impact. As such, the satisfaction

variable was the only variable entered in the first regression analysis as the independent

variable.

In the second regression analysis, Organizational Impact was the dependent variable.

According to Equation 4.A.1 [OI = f (SU, STIS, II)], Organizational Impact was hypothesized to
have a direct positive relationship with Individual Impact and indirect positive
relationships with Satisfaction and System Usage. Because the indirect relationships could

not be assessed or obtained through regression analysis, the second regression was

conducted to test for direct positive relationship between Individual Impact and

Organizational Impact. Therefore, Individual Impact was entered in the second analysis as

the independent variable.

Examining the Pearson correlation matrix in Table 16, the researcher found positive

association between Satisfaction and Organizational Impact (r = 0.76). The high correlation

coefficient of this association raises a question of whether there is a direct positive

relationship between Satisfaction and Organizational Impact. Looking back through the


literature to answer this question, the researcher found that other researchers did not

investigate this relationship. Thus, to explore this relationship, a third multiple

regression analysis was conducted to assess whether there is a direct relationship between

Satisfaction and Organizational Impact. In this analysis, the Organizational Impact variable

was the dependent variable. The Satisfaction variable and the Individual Impact variable

were the independent variables.

Equations 4.A.3 [SU = f (STIS)] and 4.A.4 [STIS = f (SU)] were not tested in the

regression analysis because the correlation analysis findings indicated that the association

between Satisfaction and System Usage was not statistically significant. In fact, System

Usage did not correlate with any variables in the model. Thus, it was dropped from the

regression analysis.

This section is divided into four subsections. The first three subsections present the

findings of the three regression analyses. SPSS for Windows, release 10, was used to conduct

the three regression analyses. The fourth subsection summarizes the changes that took place

in the study’s model as a result of the regression analysis findings.

First Regression Analysis: Regressing Individual Impact on Satisfaction

In the first regression analysis, Individual Impact was the dependent variable while

satisfaction was the independent variable. Table 17 shows the results of the first regression

analysis.


Table 17

Regression Analysis for Variable Predicting Individual Impact (N = 287)

Variable        B     SE B     β        t
Satisfaction   .67    .032    .78    21.20*

R² = .61   R² (Adjusted) = .61   F = 449.47*
*p < .01

Satisfaction was found to be a significant predictor of Individual Impact (p < .01).

The variable accounted for 61% of the variation in Individual Impact. The calculated F of

449.47 was significant at an alpha level of 0.01. The positive beta of 0.78 indicated that
Satisfaction had a significant positive effect on Individual Impact. This indicates that there is

significant statistical evidence for the positive relationship between satisfaction and

Individual Impact.
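With a single predictor, the standardized beta equals the Pearson correlation, which is why Table 17's beta (.78) matches the correlation in Table 16 and why R² = .78² ≈ .61. A sketch of the computation on simulated data (the coefficients are chosen to echo the reported values; this is not the study's data):

```python
import numpy as np

def standardized_beta(x: np.ndarray, y: np.ndarray) -> float:
    """Slope of z-scored y on z-scored x; for one predictor this equals Pearson's r."""
    zx = (x - x.mean()) / x.std(ddof=1)
    zy = (y - y.mean()) / y.std(ddof=1)
    # Least-squares slope through the origin of the standardized scores.
    return float(np.linalg.lstsq(zx[:, None], zy, rcond=None)[0][0])

rng = np.random.default_rng(2)
satisfaction = rng.normal(size=287)
individual_impact = 0.78 * satisfaction + 0.63 * rng.normal(size=287)

beta = standardized_beta(satisfaction, individual_impact)
print(round(beta, 2), round(beta ** 2, 2))   # beta near .78, R-squared near .61
```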


Second Regression Analysis: Regressing Organizational Impact on Individual Impact

In the second regression analysis, Organizational Impact was the dependent variable

while Individual Impact was the independent variable. Table 18 shows the results of the

second regression analysis.

Table 18

Regression Analysis for Variable Predicting Organizational Impact (N= 287)

Variable            B     SE B     β        t
Individual impact  .96    .04     .78    20.94*

R² = .60   R² (Adjusted) = .60   F = 438.43*
*p < .01

Individual impact was found to be a significant predictor of Organizational Impact.

The variable accounted for 60% of the variation in Organizational Impact. The calculated F

of 438.43 was significant at an alpha level of 0.01. The positive beta of 0.78 indicated that
Individual Impact had a significant positive effect on Organizational Impact. This indicates

that there is significant statistical evidence for the positive relationship between Individual

Impact and Organizational Impact.


Third Regression Analysis: Regressing Organizational Impact on Satisfaction and Individual

Impact

In the third regression analysis, Organizational Impact was the dependent variable

while Individual Impact and Satisfaction were the independent variables. Table 19 shows

the results of the third regression analysis.

Table 19

Regression Analysis for Variables Predicting Organizational Impact (N = 287)

Variable            B     SE B     β        t
Satisfaction       .42    .06     .40     7.272*
Individual impact  .58    .078    .47     8.493*

R² = .67   R² (Adjusted) = .67   F = 285.583*
*p < .01

Satisfaction and Individual Impact were found to be significant predictors of

Organizational Impact. The two variables accounted for 67% of the variation in

Organizational Impact. The calculated F of 285.583 was significant at an alpha level of <

0.01. The standardized beta values for the two variables indicate that the two variables have

positive relationships with Organizational Impact. Furthermore, examining the standardized

beta values for the two variables shows that Individual Impact has a stronger effect on

Organizational Impact than Satisfaction. The beta coefficient for Individual Impact was .47

while the beta coefficient for satisfaction was 0.40. Both beta coefficients are significant

(p < .01).


Because the two independent variables had a high correlation coefficient (r = 0.78),
multicollinearity was tested for using a specific technique called the Variance Inflation

Factor (VIF). This measure is used to show the degree to which an independent variable is

explained by other independent variables in the regression equation (Hair et al., 1992). In

the literature, a cutoff of 10 for VIF is usually used to indicate whether the multicollinearity

problem exists or not (Hair et al.). This study uses this cutoff (10) for VIF. The VIF value

for the two variables is 2.57. This indicates that there is no multicollinearity problem in the

third regression analysis. Thus, statistical results from this regression analysis do not

include biases due to multicollinearity.
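With exactly two predictors, each variable's VIF reduces to 1 / (1 − r²), where r is their pairwise correlation, so the reported value can be checked by hand. A sketch (the .78 input is the rounded correlation from Table 16, so the result lands near, not exactly on, the reported 2.57):

```python
def vif_two_predictors(r: float) -> float:
    """VIF for either predictor in a two-predictor regression with correlation r."""
    return 1.0 / (1.0 - r ** 2)

print(round(vif_two_predictors(0.78), 2))   # about 2.55, well under the cutoff of 10
```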

The Implications of the Regression Analysis Results for the Study’s Model

In summary, after completing the regression analysis, all relationships tested were

found to be statistically significant. Figure 8 depicts the study’s model after completing the

regression analysis. Figure 8 shows that the relationship between Satisfaction and

Individual Impact is positive and significant, with a beta coefficient of 0.78; and the

relationship between Individual Impact and Organizational Impact is also positive and

significant, with a beta coefficient of 0.47 after controlling for the effect of Satisfaction and a
beta coefficient of 0.78 without controlling for the effect of Satisfaction. Furthermore, this

study found a direct positive relationship between satisfaction and Organizational Impact,

with a beta coefficient of 0.40.

The model in Figure 8 was further tested using path analysis.

Figure 8. Study Model after Regression Analysis


Path Analysis


This section presents the fifth step in the data analysis plan: path analysis. There are

several advantages of path analysis over regression analysis. First, it enables the researcher

to confirm the model produced by regression analysis. Pedhazur (1997) argued that path

analysis is intended “to shed light on the tenability of the causal models a researcher

formulates” (p. 770). Second, path analysis enables the researcher to determine the indirect
effects of the variables in the model. In other words, path analysis allows for decomposing

the effects of variables into direct effects, indirect effects, and total effects. Thus, the

researcher will be able to determine the actual impacts of the variables in the model.

Third, conducting path analysis through the use of computer software (e.g., AMOS,

LISREL) allows the researcher to determine the goodness of fit of the model, that is, how well
the model fits the data collected. There are several measures of the degree of fit (e.g.,

Goodness of Fit Index, Adjusted Goodness of Fit Index, Root Mean Square Residual).

Because of the above advantages, path analysis is used in this study. The following

subsection discusses the results of path analysis.

Findings of Path Analysis

This subsection presents the results of the path analysis conducted on the study’s model.
These include the resulting path coefficients and their significance, the effects of each variable in the

model (direct, indirect, and total), and measures of the goodness of fit of the study’s model.


The study model resulting from the regression analysis was the starting point for the path

analysis. Figure 8 depicts the model resulting from the regression analysis.

Using AMOS, this model was entered into the path analysis. Figure 9 depicts the

model produced by the path analysis including the path coefficients. Table 20 summarizes

the path coefficients for each path in the model produced. Path coefficients could be

reported as standardized or non-standardized (Norris, 1997). Several researchers, however,

recommended using standardized coefficients if the intention is to compare the magnitude of

each path in the model (Norris, 1997; Asher, 1976; Retherford & Choe, 1993; Loehlin,

1992). Thus, this study reports the path analysis results as standardized coefficients.

Figure 9. Study Model after Path Analysis


Table 20

Summary of Standardized Path Coefficients of Paths in the Model Produced by the Path Analysis

Path          Path coefficient (standardized)    SE     p-value
Satis → IM                .78                   .03      .000
IM → OI                   .47                   .06      .000
Satis → OI                .40                   .07      .000

Satis = Satisfaction; IM = Individual Impact; OI = Organizational Impact

Table 20 shows that all path coefficients are significant. The strongest direct effect on

Organizational Impact comes from Individual Impact, with a path coefficient of 0.47.
Satisfaction affects Organizational Impact with a path coefficient of 0.40. The path from

satisfaction to Individual Impact has a coefficient of 0.78.

The complete magnitude of the effects of these variables, however, can only be

assessed through knowing the indirect effects of these variables. As part of the final output

of path analysis, AMOS release 4.01 provides the direct, indirect, and total effects of each

variable in the tested model. Table 21 summarizes the effects for each variable in the

model.


Table 21

Summary of Direct, Indirect, and Total Effects of Research Model Variables

Effect                                        Direct   Indirect   Total
Satisfaction on Individual Impact               .78       —        .78
Individual Impact on Organizational Impact      .47       —        .47
Satisfaction on Organizational Impact           .40      .37       .76

The results in Table 21 shed more light on the magnitude of the effect of some of the

variables in the model. Satisfaction has a path coefficient equal to 0.40 with Organizational

Impact. However, taking into consideration the effect of Satisfaction on Individual Impact,

which in turn affects Organizational Impact, the total impact of Satisfaction including the

direct and indirect effect is 0.76. Therefore, by examining the total effect of the different

variables, Individual Impact was no longer the most important factor affecting

Organizational Impact (path coefficient = 0.47). Rather, Satisfaction became the most

important factor affecting Organizational Impact (path coefficient = 0.76). Consequently,

Satisfaction became the most important factor affecting both Individual Impact (path

coefficient = 0.78) and Organizational Impact. Other variables in the model continue to have

the same path coefficients.

One of the advantages of conducting path analysis using computer statistical

packages, such as AMOS, is that the significance of the model as a whole could be assessed

through the measures of fit which assess the goodness of fit of a model with the data

collected. These measures are provided by these statistical packages as part of the final

output of path analysis. This study used several measures of fit. These measures include


Goodness of Fit Index (GFI), Root Mean Square Residual (RMR), Incremental Fit Index

(IFI), and Comparative Fit Index (CFI). Using multiple measures of goodness of fit

increases confidence in the fit of the study’s model. Table 22 shows the results of the four

measures of fit used in this study to assess the goodness of fit of the model produced by the

path analysis.

Table 22

Measures of Goodness of Fit for the Model Produced by the Path Analysis

Measure of fit   Recommended degree of goodness of fit   Study’s model degree of fit
GFI              1 or close to 1                         1
IFI              1 or close to 1                         1
CFI              1 or close to 1                         1
RMR              0 or close to 0                         0

Column 2 in Table 22 provides the cutoff of the degree of fit for each measure of fit

while column three provides the degrees of fit of the model produced by the path analysis

using each measure. Based on these results, the researcher concluded that the model
produced by the path analysis is

significant because the model met all the cutoffs of the measures of fit used in this study.


Comparison of Regression Analysis and Path Analysis Findings

This section presents a comparison between the statistics produced by the regression

analysis and path analysis. The main goal of this comparison is to determine whether the

two different statistical methods produced the same outcomes. If the two statistical methods

show the same statistics, then we can conclude that the model produced by the regression

analysis is statistically significant and vice versa.

The comparison covered two areas. First, the significant and non-significant

relationships among the variables in the study’s model; second, the actual magnitude of the

effects of the variables in the study’s model. The latter covered the direct, indirect, and total

effects found using the two statistical methods.

Table 23 summarizes the results of testing the relationships among the variables

in the study’s model using regression analysis and path analysis. Statistics in Table 23

clearly indicate that the two statistical methods produced the same significant relationships.


Table 23

Summary of Relationships Found among the Variables in the Study’s Model Using

Regression Analysis and Path analysis

Relationship     Path analysis                      Regression analysis
                 Path coefficient                   Regression coefficient
                 (standardized)    Significant      (beta)    Significant
Satis → IM            .78              Yes            .78         Yes
IM → OI               .47              Yes            .47         Yes
Satis → OI            .40              Yes            .40         Yes

Satis = Satisfaction; IM = Individual Impact; OI = Organizational Impact

Table 24 summarizes the different types of effect for each variable in the study’s model,

which were determined by the use of regression analysis and path analysis. In the regression

analysis, the total effect of Satisfaction on Organizational Impact was calculated through the

following equation:

Total effect = β31 + (β21 × β32)
               (direct effect)  (indirect effect)

where β31 = the standardized regression coefficient of the relationship between
Organizational Impact (3) and Satisfaction (1); β21 = the standardized regression coefficient
of the relationship between Individual Impact (2) and Satisfaction (1); and β32 = the
standardized regression coefficient of the relationship between Organizational Impact (3)
and Individual Impact (2). This equation was adopted from Asher (1976).


The result of this equation is:

Total effect = .399 + (.782 × .466)
             = .399 + .365
             = .764
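The decomposition above can be checked in a few lines; the inputs are the unrounded coefficients the study reports, and any small difference from the printed .365 and .764 comes from rounding in the inputs:

```python
# Effect decomposition for a three-variable path model (Asher, 1976):
# total effect = direct path + product of the paths through the mediator.
b31 = 0.399   # Satisfaction -> Organizational Impact (direct)
b21 = 0.782   # Satisfaction -> Individual Impact
b32 = 0.466   # Individual Impact -> Organizational Impact

indirect = b21 * b32
total = b31 + indirect
print(round(indirect, 2), round(total, 2))   # roughly .36 and .76
```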

Table 24

Summary of Direct, Indirect, and Total Effects of Research Model Variables Determined by
Regression Analysis and Path Analysis

                                                  Direct               Indirect                Total
Effect                                      Regression   Path     Regression   Path     Regression   Path
                                             analysis  analysis    analysis  analysis    analysis  analysis
Satisfaction on Individual Impact              .78       .78          —         —          .78       .78
Individual Impact on Organizational Impact     .47       .47          —         —          .47       .47
Satisfaction on Organizational Impact          .40       .40         .37       .37         .76       .76

The statistics in Table 24 clearly indicate that using both regression analysis and path

analysis led to reaching the same coefficients for the different types of effects. Thus, the

same variable that was important in the model produced by the regression analysis was also

important in the model produced by the path analysis. In both analyses, Satisfaction was the
most important variable.


Based on the preceding comparison, the researcher concluded that the model

produced by the regression analysis is significant. First, when the model was

entered into path analysis, the same statistics that were produced by the regression analysis

were produced by the path analysis. Thus, this result confirms the results of the regression

analysis. Second, as part of the path analysis output, measures of goodness of fit provided

further evidence that the model was significant. The study’s model met all cutoffs of the

measures of fit.

Figures 8 and 9 depict the study’s model that was tested and validated. In this

model, the relationship between Satisfaction and Individual Impact is positive and

significant, with a beta coefficient of 0.78; the relationship between Individual Impact and

Organizational Impact is also positive and significant, with a beta coefficient of 0.47 after

controlling for the effect of Satisfaction and a beta coefficient of 0.78 without controlling for

the effect of Satisfaction; and the relationship between Satisfaction and Organizational

Impact is positive and significant, with a beta coefficient of direct effect of 0.40 and a beta

coefficient of indirect effect of 0.37.


Chapter 5


SUMMARY, CONCLUSIONS, AND RECOMMENDATIONS

This chapter presents the conclusions of this study. It is divided into three sections.

The first section presents an overview of the purpose of the study, and summarizes and

interprets the results. The second section discusses the implications and contributions of this

study to public administration theory and management. The final section discusses possible

avenues of future research based on the results of this study.

Summary of the Findings

As public organizations in Kuwait enter the new information age, their investment in

and usage of information systems are expected to be substantial. In order to obtain the

resources needed to invest in these systems, public organizations need to justify the expense

by explaining the projected outcome and providing clear evidence that these systems will

increase the efficiency and effectiveness of these organizations in delivering services to the

public. This situation calls for an examination of management information systems in

public organizations. A more specific question that requires exploration is how to evaluate

the success of information systems in public organizations. That is where this study comes

into play.

By focusing on how to evaluate information systems in public organizations, this

study has helped fill a gap in the research literature. This research conducted a


comprehensive review of prior conceptual and empirical studies focusing on evaluating

information systems. This involved reviewing two bodies of literature: 1) the management

information systems literature on private organizations and 2) the emerging public

management information systems literature on public organizations.

The most prominent opportunity revealed in the review of the information systems

literature was to identify appropriate measurements and models for evaluating information

systems in public organizations. The review of the management information systems

literature on private organizations exposed several empirically tested measures of

information systems components, as well as a limited number of studies providing

theoretical models for evaluating information systems, but virtually nothing involving the

public sector.

Of the theoretical models, DeLone and McLean’s (1992) model (Figure 1) emerged

as the most comprehensive. Interestingly, this model received the most support from

subsequent empirical studies. Consequently, the current study used this model as the

conceptual foundation for this research.

Based upon several studies in the public management information systems literature,

management information systems literature, and open system literature, this study

conceptualized the DeLone and McLean model in three frames. The outer frame is called

the external environment frame, the middle frame is called the task environment frame, and

the inner frame is called the organizational boundary frame (Figure 5). The DeLone and

McLean model proposed that there are six variables (System Quality, Information Quality,

System Usage, User Satisfaction, Individual Impact, and Organizational Impact) that

measure the success of information systems within the boundary of an organization; the model does not


include any external actors in the evaluation process. A seventh variable, External

Environment Satisfaction, was added to the DeLone and McLean model to denote the

satisfaction of external actors.

The study research question was: to what extent is DeLone and McLean’s model

useful in evaluating information systems in the public sector? Several equations that

represent the relationships among the study’s model were formulated as the hypotheses for

this study. Six Kuwaiti public organizations were randomly selected as the study’s sample.

A survey methodology was chosen to collect data. A total of 363 usable questionnaires

were obtained. Factor analysis, correlation analysis, regression analysis, and path analysis

were used to analyze the study’s model.

Initial findings of this study did not support the DeLone and McLean model as it was

originally proposed. Factor analysis of the 40-item questionnaire measuring Information

Quality, System Quality, System Usage, and User Satisfaction resulted in a two-factor

solution. The first factor was labeled Satisfaction. Items included under this factor were

most of the items measuring Information Quality, System Quality, and User Satisfaction.

The second factor was labeled Usage. Items included under this factor were items that

measure System Usage. Thus, the study findings led to respecifying the study’s model.
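The mechanics of this kind of factor extraction can be sketched from first principles: the dominant eigenvector of the inter-item correlation matrix shows which items load together on the first factor. The 4×4 correlation matrix below is a hypothetical illustration (three items that intercorrelate strongly, standing in for the satisfaction-type items, plus one weakly related usage-type item), not the study's 40-item data:

```python
# Sketch of first-factor extraction via power iteration on a correlation
# matrix. The 4x4 matrix is hypothetical: items 1-3 intercorrelate
# strongly (a "Satisfaction"-like cluster); item 4 does not.
from math import sqrt

R = [
    [1.0, 0.7, 0.7, 0.2],
    [0.7, 1.0, 0.7, 0.2],
    [0.7, 0.7, 1.0, 0.2],
    [0.2, 0.2, 0.2, 1.0],
]

def first_factor_loadings(R, iters=200):
    """Dominant eigenvector of R (power iteration), scaled by the square
    root of its eigenvalue to give loadings on the first factor."""
    n = len(R)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(R[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    eigenvalue = sum(v[i] * sum(R[i][j] * v[j] for j in range(n))
                     for i in range(n))
    return [sqrt(eigenvalue) * x for x in v]

loadings = first_factor_loadings(R)
# Items 1-3 load much more strongly than item 4, so they would be grouped
# under one factor -- the pattern behind a two-factor solution.
```

In an exploratory factor analysis, items with high loadings on the same factor are grouped and labeled together, which is how the Satisfaction and Usage factors above were formed.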

Under the revised model, DeLone and McLean's six-variable model became a four-variable model, as indicated in Figure 7. This revised model proposes that Satisfaction and System Usage

affect each other and Individual Impact. Individual Impact, in turn, affects Organizational

Impact. In other words, when users perceive that their information systems are of high quality and produce high-quality information, and when usage of those systems increases, users increasingly perceive that the systems make them more productive by supplying the timely, needed information required for their work-related responsibilities. This, in turn, increases the perception that information

systems enhance the effectiveness of the organization. Moreover, the study findings

indicated that when usage of information systems increases, the perception that these systems are of high quality and produce high-quality information increases, and vice

versa. The respecification of the study's model led to modifying the research question and

hypotheses. The research question became: To what extent is the modified model (four

factor model) useful in evaluating information systems in the public sector?

Correlation analysis was first used to analyze the revised model. Findings indicated

that there were positive direct associations among the variables in the model except for the

System Usage variable. This variable did not relate directly to any of the other variables in

the model. The instrument used to measure System Usage could explain this result. This

instrument consists of twenty items that measure four dimensions of usage: (1) actual daily

use, (2) frequency of use, (3) total tasks, and (4) total applications. The numbers of items

that measure each dimension are one item (SU1), one item (SU2), eight items (SU3-SU10),

and ten items (SU11-SU20), respectively. The factor analysis led to the elimination of

the two items that measure the actual daily use dimension and frequency of use dimension.

Thus, the lack of measures for these two dimensions is a possible cause of the lack of associations with the other variables in the study's model, especially considering that, during the pilot study and consultation process, several participants and Kuwaiti professors indicated that information systems had only recently been introduced in their organizations, were not widely used in all work-related tasks, and involved only a limited number of software packages. In other words, in Kuwaiti public


organizations, there is a need for a measure of information system use that relies on the actual daily use and frequency of use dimensions more than on the other

dimensions of usage.

An interesting and unexpected finding of the correlation analysis is the significant

direct association between Satisfaction and Organizational Impact. Previous research gave no indication of such a relationship between the two variables. Because of this surprising

and potentially important result, the researcher decided to extend the analysis to include the

investigation of this relationship in the subsequent regression analysis and path analysis.

Consequently, the lack of associations with the System Usage variable led to respecifying the four-variable model (Figure 7) as a three-variable model (Figure

8). The revised model proposes that Satisfaction affects Individual Impact, which, in turn, affects Organizational Impact. Satisfaction also directly affects Organizational Impact.

In other words, when users perceive that their information systems are of high quality and produce high-quality information, they increasingly perceive that the systems make them more productive by supplying the timely, needed information required for their work-related responsibilities. This, in turn, increases the perception that information systems enhance the effectiveness of the organization. Moreover, the study findings indicated that as the perception of having high-quality information systems that produce high-quality information increases, so does the perception that information systems enhance the organization's effectiveness.

The three-variable model was tested using regression analysis and path analysis.

Both analyses supported the above relationships. Path analysis’ findings indicated that the


model fit the data. Both analyses showed that Satisfaction had a significant positive impact on both Individual Impact and Organizational Impact. Thus, the results of this study provide support for the three-variable model of evaluating information system success (Figures 8 and 9).
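The path-analytic step for a three-variable model of this shape can be sketched with standardized regressions derived from the correlations. The correlation values below are hypothetical stand-ins, not the study's estimates:

```python
# Sketch of path analysis for a three-variable model:
#   Satisfaction (S) -> Individual Impact (I) -> Organizational Impact (O),
#   plus a direct path S -> O.
# The correlations below are hypothetical, not the study's estimates.
r_SI, r_SO, r_IO = 0.60, 0.50, 0.55

# Path S -> I: standardized regression of I on S alone.
p_IS = r_SI

# Paths into O: standardized regression of O on S and I together,
# solved from the normal equations for two standardized predictors.
det = 1.0 - r_SI ** 2
p_OS = (r_SO - r_IO * r_SI) / det   # direct effect S -> O
p_OI = (r_IO - r_SO * r_SI) / det   # effect I -> O

# The model exactly reproduces r_SO as the direct effect plus the
# indirect effect routed through Individual Impact.
indirect_S_to_O = p_IS * p_OI
assert abs(p_OS + indirect_S_to_O - r_SO) < 1e-12
```

Positive path coefficients on both the direct path and the indirect path are what correspond, in such a model, to Satisfaction raising both Individual Impact and Organizational Impact.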

In summary, the three-variable model of evaluating information system success emerged from DeLone and McLean's original six-variable model and the four-variable model through several respecification steps driven by earlier stages of the analysis. Unlike the two models that precede it, the three-variable model proposes that information system success has three dimensions, related as indicated in Figures 8 and 9. As such, this study provides a new, empirically tested model of information system success in public organizations.

The original six-variable model of DeLone and McLean and the four-variable model, nevertheless, were useful in providing a general frame that includes the possible dimensions of information system success within the organizational boundary.

Based on the preceding, the study’s research question was answered.

Potential Contributions and Implications

The study of information systems in public organizations will become very important

as we enter the information age in which usage and investment in public organization

information systems are expected to increase greatly. However, the study of information

systems in public organizations is still in its infancy in both the public administration theory

and public management literatures. An emerging line of research called public management


information systems literature, which started in the late 1980s, focuses on the investigation of

information systems in public organizations. The literature review for the present study

revealed that most studies in this literature are descriptive, with very few conceptual

studies (Bozeman & Bretschneider, 1986; Stevens & McGowan, 1985). This study makes

an important contribution to this small body of literature.

The study contributes to public administration theory and public management

thinking in several ways. First, this study contributes to the conceptual side of the public management information systems literature by proposing a comprehensive model for evaluating

information systems in public organizations. This model took into consideration both inside

and outside actors, whose evaluations are crucial in assessing information system

performance. Part of this model was empirically tested in this study.

Consequently, this study is a further step in developing a theory of information

system evaluation in the public sector. Before this study, this issue had not been investigated in such a comprehensive manner. Thus, it extends the work of Bozeman & Bretschneider

(1986), Stevens & McGowan (1985), and other writers who attempted to conceptualize

public management information systems.

Second, the study also contributes to the practical side of public management. Public

managers face ever-greater challenges in the information age. An important and significant

challenge involves how to evaluate the success of information systems, how to justify the allocation of public resources to these systems, and how to ensure the success of information systems in positively affecting individuals and the overall organization.

The conceptual model developed and tested in this study has direct implications for

the practice of public management. As a result, it can be called a “dual relevance


knowledge” (Kaboolian, 1996) or “practical theory” model (Harmon & Mayer, 1994).

Kaboolian (1996) defined dual relevance knowledge as knowledge that benefits both the theoretical

side of the field and the practical side. Harmon and Mayer (1994) defined a practical theory as one that either illuminates possibilities for action that would not otherwise be apparent

or stimulates greater understanding of what the person has already been doing.

According to Kaboolian (1996), “one way to go about this [creating dual relevance

knowledge] is to ask questions of relevance to practitioners and test, evaluate, and develop the insights of the disciplines in the course of answering those questions” (p. 80).

The three-variable model is based on the six-variable model of DeLone and

McLean. These authors developed their model by conducting a comprehensive review of

relevant empirical literature. Consequently, the three-variable model belongs to the type of

theoretical models described by both Kaboolian (1996) and Harmon and Mayer (1994).

Thus, the findings of this study can assist public managers in dealing with the challenges of the information age. The three-variable model and instruments developed and

validated in this study can be used to measure the success of existing information systems in

increasing the effectiveness and efficiency of both the performance of individuals and the

organization. Using the three-variable model and instruments, public managers could use

the results of the evaluation to provide empirical evidence to overseeing actors about the level of success of their information systems and, in turn, to justify the public resources invested in these systems.

Findings of this study, furthermore, provide guidance on how public managers may

influence the success of information systems within their organizations. Findings of this

study indicate that Satisfaction is a key variable in the three-variable model. The empirical


evidence in this study suggests that as Satisfaction increases, Individual Impact and

Organizational Impact also increase. The Satisfaction variable measures the satisfaction of

information systems users with the quality of information, quality of systems, and their

overall satisfaction. Thus, public managers may positively influence the success of information systems by increasing the quality of both the information and the systems

themselves.

Looking deeper into the Information Quality and System Quality variables, we could argue that Information Quality reflects users' need for information that is necessary

to accomplish their work while System Quality reflects the technical needs and conditions

that should be in place for the information systems to have high quality, such as type of

cables and cooling systems used in buildings and the level of electric power available.

Thus, satisfying the needs of both the technical subsystem (information systems) and the social subsystem (users) could lead to a higher level of satisfaction. Public managers could

satisfy these needs using different methods. For example, public managers could allocate

more resources toward buying more powerful information systems.

However, a more effective and efficient method is to address these needs at the design stage of information systems, taking into account the needs of both subsystems (technical and social). In other words, to design a successful information system, public

managers should not follow the technological imperative model, which views technology as

the independent variable that determines other dimensions in an organization. According to

this approach, the introduction of high technology leads to increasing productivity of an

organization. This design approach could increase productivity in the short run; however, in

the long run it is doomed to fail. A logical explanation for why the increase in productivity


could occur in the short run is offered by Chisholm (1988), who explained that “suboptimal designs occur because the employees who operate most high-technology systems bear the consequences of design decisions and must make the system work regardless of its designs” (p. 41).

The sociotechnical systems approach provides a way of achieving the joint optimization of the technical and social systems within an organization. Several writers

have proposed and used this approach to design information systems with great success (e.g.,

Chisholm, 1988; Hogan, 1993; Sharma et al., 1991; Shani & Sena, 1994; Purser, 1991;

Terlage, 1994). For example, Chisholm (1988) stated

The advanced information technology requires new strategies...and different organizational and workplace designs that emphasize the human attributes of learning, questioning, and deciding to reach the technology’s potential for contributing to organization effectiveness...The sociotechnical systems (STS) approach provides an effective way of working to improve total system performance through improved links between the human system and technology. (p. 45)

Consequently, the STS approach is an effective and efficient design method for creating a successful information system, one that not only fulfills the needs of the technical and social subsystems but is also flexible and adaptive in responding to the environment.

Third, another contribution of this study is the development of a survey instrument

that could be used as a foundation for future research in the public sector, which could

include any of the variables investigated in this study. This instrument was validated through a rigorous process that included a comprehensive review of available instruments in both the public and private sectors, a back-and-forth translation process, a pilot


study, consultation with experts in information systems and public administration, factor analysis, and tests of internal consistency.
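The internal-consistency step is conventionally computed as Cronbach's alpha over the items of a scale; a minimal sketch follows, with hypothetical item responses rather than the study's data:

```python
# Minimal sketch of an internal-consistency check (Cronbach's alpha).
# Rows are respondents, columns are scale items; responses are hypothetical.
def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(rows[0])
    items = [[row[j] for row in rows] for j in range(k)]
    totals = [sum(row) for row in rows]
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items)
                            / variance(totals))

responses = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [4, 4, 5, 4],
]
alpha = cronbach_alpha(responses)
```

An alpha near or above the conventional 0.70 threshold is usually taken as evidence that the items hang together well enough to form a single scale.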

A fourth contribution of this study is testing a model, instruments, and a research

process that are based on prior research in the United States in a Middle Eastern country

(Kuwait). This study reached findings similar to those found in the United States in terms of the relationships in the study's model and the results of the instrument validation. Thus, the external validity of the model, concepts, and instruments was enhanced by this study.

Future Research Directions/Suggestions

Several avenues of future research are suggested by the findings of this study. First,

while this study has provided much-needed empirical support for the three-variable model of information system success, broader empirical support for this model is still needed. Thus, future research should test the applicability of this model in different types of

public organizations (e.g., non-profit organizations) and other societies.

Second, as stated throughout this study, this study represents a first step in developing a comprehensive model for evaluating information systems in the public sector. Thus, a logical extension of this study is to add external actors to the three-variable model of evaluating information systems. The equations in Chapter Three could be used as the basic

hypotheses for this future research.

Third, because this study employed quantitative methods and only questionnaires to

collect data, future research should also employ qualitative methods. For example, actual

observation of users of information systems or interviewing these users may give valuable


insights regarding their satisfaction with these systems rather than just asking them

questions about their perceived satisfaction. Likewise, reviewing secondary data such as individual and overall organizational productivity reports could provide additional insights

regarding the individual impact and organizational impact variables. In other words, the

study would greatly benefit from some type of triangulation in the data collection method.

Fourth, findings of this study did not show positive associations or relationships

between System Usage and other variables in the model. As stated in this study, a possible

reason for this could be the measure used in this study to test for System Usage. This

measure was adopted from Igbaria et al. (1989). This measure relies heavily on two dimensions: first, the number of organizational functions in which information systems are used; second, the number of software packages used in work-related responsibilities.

During the pilot study and consultation process, several participants and Kuwaiti professors indicated that information systems had only recently been introduced in their organizations and were not widely used in all work-related tasks. This could explain the low usage of information systems and the lack of associations between this variable and the other variables in the study's model, which led, in turn, to dropping System Usage from the study's model. Thus, future research should use other measures of System Usage that do not

have such reliance (e.g., Kim & Lee, 1986; Sherman, 1997). This might lead to including

System Usage in the model.


BIBLIOGRAPHY

Al-Jannaee, A. (1989). An investigation of leadership style and its effect upon

employee motivation and satisfaction with supervisors in public and private organizations in

Kuwait. Unpublished doctoral dissertation. University of Denver, Denver, CO.

Anakwe, U. P., Anandarajan, M., & Igbaria, M. (1998). Information technology usage dynamics in Nigeria: An empirical study. Journal of Global Information Management,

7 (2), 13-21.

Ang, J. & Soh, P. H. (1997, October). User information satisfaction, job satisfaction

and computer background: An exploratory study. Information & Management, 32 (5), 255-

266.

Anonymous. (2001). Facts and figures (Kuwait). In LEXIS.NEXIS Academic

Universe [Electronic database]. Available: http://web.lexis-nexis.com/universe/docu.

Asher, H. B. (1976). Causal modeling. Beverly Hills, CA: SAGE.

Bailey, J. E. & Pearson, S. W. (1983, May). Development of a tool for measuring

and analyzing computer user satisfaction. Management Science, 29 (5), 530-545.


Ballantine, J., Bonner, M., Levy, M., Martin, A., Munro, L., & Powell, P. L. (1996).

A 3-D model of information systems success: The search for the dependent variable.

Information Resources Management Journal, 9 (4), 5-14.

Baroudi, J. J., Olson, M. H., & Ives, B. (1986). An empirical study of the impact of user involvement on system usage and information satisfaction. Communications of the ACM, 29 (3), 232-238.

Baroudi, J. J. & Orlikowski, W. J. (1988, Spring). A short-form measure of user

information satisfaction: A psychometric evaluation and notes on use. Journal of Management Information Systems, 4 (4), 44-59.

Bender, D. H. (1986, Fall). Financial impact of information processing. Journal of

Management Information Systems, 3 (2), 22-32.

Bikson, T. K., Stasz, C., & Mankin, D. A. (1985). Computer-mediated work:

Individual and organizational impact in one corporate headquarters. Santa Monica, CA:

Rand.

Blaylock, B. K. & Rees, L. P. (1984, Winter). Cognitive style and the usefulness of

information. Decision Sciences, 15 (1), 74-91.


Bozeman, B. & Bretschneider, S. (1986, Special Issue). Public management

information systems: Theory and prescription. Public Administration Review, 46, 475-487.

Bozeman, B. & Straussman, J. D. (1990). Public management strategies. San

Francisco, CA: Jossey-Bass.

Bretschneider, S. (1990). Management information systems in public and private

organizations: An empirical test. Public Administration Review, 50 (4), 536-544.

Bretschneider, S. & Wittmer, D. (1993, March). Organizational adoption of

microcomputer technology: The role of sector. Journal of the Institute of Management Science, 4 (1), 88-108.

Brislin, R. W. (1986). The wording and translation of research instruments. In

W. J. Lonner & J. W. Berry (Eds.), Field methods in cross-cultural research (pp. 137-164).

Beverly Hills, CA: SAGE.

Bugler, D. T. & Bretschneider, S. (1993). Technology push or program pull:

Interest in new information system technologies within public organizations. In

B. Bozeman (Ed.), Public management: The state of the art (pp. 275-293). San Francisco,

CA: Jossey-Bass.


Caudle, S., Gorr, W., & Newcomer, K. (1991). Key information systems issues for

the public sector. MIS Quarterly, 15, 171-188.

Cerullo, M. J. (1980, December). Information systems success factors. Journal of

Systems Management, 31 (12), 10-19.

Cheney, P. M. & Dickson, G.W. (1990). Organizational characteristics and

information systems: An exploratory investigation. Academy of Management Journal, 25 (1), 170-184.

Chisholm, R. F. (1987-1988, Summer). Introducing advanced information

technology into public organizations. Public Productivity Review, 11 (4), 39-56.

Churchill, G. A. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16, 64-73.

Conklin, J. H., Gotterer, M. H., & Rickman, R. (1982, August). On-line terminal

response time: The effects of background activity. Information & Management, 5 (3), 12-20.

Creswell, J. W. (1994). Research design: Qualitative and quantitative approaches.

Thousand Oaks, CA: SAGE.


Cron, W. L. & Sobol, M. (1983, June). The relationship between computerization

and performance: A strategy for maximizing the economic benefits of computerization.

Information & Management, 3, 171-181.

Cummings, T. G. & Srivastva, S. (1977). Management of work: A socio-technical

systems approach. San Diego, CA: University Assoc.

Cureton, E. E. & D'Agostino, R. B. (1983). Factor analysis. Hillsdale, NJ:

Lawrence Erlbaum Assoc.

Davis, F. D. (1989, September). Perceived usefulness, perceived ease of use, and

user acceptance of information technology. MIS Quarterly, 13 (3), 319-340.

DeLone, W. H. & McLean, E. R. (1992, March). Information systems success: The quest for the dependent variable. Information Systems Research, 3 (1), 60-95.

Doll, W. J. & Torkzadeh, G. (1988a, June). The measurement of end-user

computing satisfaction. MIS Quarterly, 12 (2), 258-274.

Doll, W. J., & Torkzadeh, G. (1988b, March). Developing a multidimensional

measure of system-use in an organizational context. Information & Management, 33 (4), 171-185.


Edstrom, A. (1977, July). User influence and the success of MIS projects: A

contingency approach. Human Relations, 30 (7), 589-607.

Ein-Dor, P. & Segev, E. (1978, June). Organizational context and the success of

management information systems. Management Science, 24 (10), 1064-1077.

Ein-Dor, P., Segev, E., & Steinfeld, A. (1981, December) Use of management

information systems: An empirical study. Proceedings of the Second International

Conference on Information Systems, Cambridge, MA, 215-228.

Franz, C. R. & Robey, D. (1986, Summer). Organizational context, user

involvement, and the usefulness of information systems. Decision Sciences, 17 (3), 329-355.

Gallagher, C. A. (1974, March). Perceptions of the value of management

information systems. Academy of Management Journal, 17 (1), 46-55.

Garrity, E. J. & Sanders, G. L. (1998). Dimensions of information systems success. In E. J. Garrity & G. L. Sanders (Eds.), Information systems success measurement. Hershey, PA: Idea Group Publishing.

Garrity, J. T. (1963, July-August). Top management and computer profits. Harvard

Business Review, 41 (4), 6-12.


Ginzberg, M. J. (1981, April). Early diagnosis of MIS implementation failure:

Promising results and unanswered questions. Management Science, 27 (4), 459-478.

Glorfeld, K. (1994). Information technology: Measures of success and impact.

Unpublished doctoral dissertation. University of Arkansas, Little Rock, AR.

Goslar, M. D. (1986, Summer). Capability criteria for marketing decision support

systems. Journal of Management Information Systems, 3 (1), 81-95.

Hair, J., Anderson, R. E., Tatham, R. L., & Black, W. (1992). Multivariate data

analysis. Englewood Cliffs, NJ: Prentice-Hall.

Hall, R. (1972). Organizations. Englewood Cliffs, NJ: Prentice-Hall.

Hamilton, S. & Chervany, N. L. (1981). Evaluating information system effectiveness: Part I. Comparing evaluation approaches. MIS Quarterly, 5 (3), 55-69.

Harmon, M. & Mayer, R. (1994). Organization theory for public administration.

Burke, VA: Chatelaine.

Hendrickson, A. R., Glorfeld, K., & Cronan, T. P. (1994, July/August). On the

repeated test-retest reliability of the end-user computing satisfaction instrument: A

comment. Decision Sciences, 25 (4), 655-667.


Hogan, C. (1993). How to get more out of videoconference meetings: A socio-

technical approach. Training and Management Development Methods. 7 (1), 5-21.

Igbaria, M. (1991). An examination of microcomputer usage in Taiwan.

Information & Management, 22, 19-28.

Igbaria, M., Pavri, F. N., & Huff, S. L. (1989). Microcomputer applications: An

empirical look at usage. Information & Management, 16 (4), 187-196.

Igbaria, M. & Tan, M. (1997, March). The consequences of information technology

acceptance on subsequent individual performance. Information & Management, 32 (3),

113-121.

Iivari, J. & Ervasti, I. (1994). User information satisfaction: IS implementability

and effectiveness. Information & Management, 27, 205-220.

Iivari, J. & Koskela, E. (1987, September). The PIOCO model for information

systems design. MIS Quarterly, 11 (3), 401-419.

Ishman, M. (1998). Measuring information success at the individual level in cross-

cultural environments. In E. J. Garrity & G. L. Sanders (Eds.), Information systems success measurement. Hershey, PA: Idea Group Publishing.


Ives, B., Hamilton, S., & Davis, G. B. (1980). A framework for research in computer-

based management information systems. Management Science, 26 (9), 910-934.

Ives, B., Olson, M. H., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26 (10), 785-793.

Ives, B. & Olson, M. H. (1984, May). User involvement and MIS success: A review

of research. Management Science, 30 (5), 586-603.

Jenster, P. V. (1987, Winter). Firm performance and monitoring of critical success

factors in different strategic contexts. Journal of Management Information Systems, 3 (3), 17-33.

Johnston, H. R. & Vitale, M. R. (1988, June). Creating competitive advantage with

inter-organizational information systems. MIS Quarterly, 12 (2), 153-165.

Jones, J. W. & McLeod, R., Jr. (1986, Spring). The structure of executive

information systems: An exploratory analysis. Decision Sciences, 17 (2), 220-249.

Joreskog, K. & Sorbom, D. (1993). LISREL 8: Structural equation modeling with

the SIMPLIS command language. Hillsdale, NJ: Lawrence Erlbaum.


Joshi, K. (1990). An investigation of equity as a determinant of user information satisfaction. Decision Sciences, 21, 786-807.

Kaboolian, R. K. (1996). Sociology. In D. F. Kettl & H. B. Milward (Eds.), The

state of public management (pp. 75-91). Baltimore, MD: Johns Hopkins University Press.

Kachigan, S. K. (1991). Multivariate statistical analysis. New York, NY: Radius.

Kaiser, K. M. & Srinivasan, A. (1980, November). The relationship of user

attitudes toward design criteria and information system success. Proceedings of the Twelfth

Annual Meeting of American Institute of Decision Science. Las Vegas, NV.

Kanellis, M. L. & Paul, R. J. (1999). Evaluating business information system fit:

From concept to practical application. European Journal of Information Systems. 8. 65-76.

Kappelman, L. A. & McLean, E. R. (1991, December). The respective roles of user

participation and user involvement in information system implementation success. In J. I.

DeGross, I. Benbasat, D. DeSanctis, & C. M. Beath (Eds ), Proceedings of the Twelfth

International Conference on Information Systems (pp. 339-349). New York, NY.

Karahanna, E. & Straub, D. W. (1999, April). An exploratory contingency model of

user participation and MIS use. Information & Management. 35 (4), 37-250.


Keen, P. G. (1981, March). Value analysis: Justifying decision support systems. MIS Quarterly, 5(1), 1-16.

Khalil, O., & Elkordy, M. M. (1999, April-June). The relationship between user satisfaction and systems usage: Empirical evidence from Egypt. Journal of End User Computing, 11(2), 21-28.

Kidder, L. H., & Judd, C. M. (1986). Research methods in social relations (5th ed.). New York, NY: Holt, Rinehart and Winston.

Kim, Y., & Kim, Y. (1999, October/December). Critical issues in the network era. Information Resources Management Journal, 4(4), 14-23.

Kim, E., & Lee, J. (1986, September). An exploratory contingency model of user participation and MIS use. Information & Management, 11(2), 87-97.

Kim, C., Suh, K., & Lee, J. (1998). Utilization and user satisfaction in end-user computing: A task contingent model. Information Resources Management Journal, 11(4), 11-24.

King, W. R., & Epstein, B. J. (1983, January). Assessing system value: An experimental study. Decision Sciences, 14(1), 34-45.


King, W. R., & Rodriguez, J. I. (1981, June). Participative design of strategic decision support systems: An empirical assessment. Management Science, 27(6), 717-726.

King, W. R., & Rodriguez, J. I. (1978, September). Evaluating management information systems. MIS Quarterly, 2(3), 43-51.

Kriebel, C. A., & Raviv, A. (1982). Application of a productivity model for computer systems. Decision Sciences, 13(2), 266-284.

Kriebel, C. A., & Raviv, A. (1980, March). An economics approach to modeling the productivity of computer systems. Management Science, 26(3), 297-311.

Larcker, D. F., & Lessig, V. P. (1980, January). Perceived usefulness of information: A psychometric examination. Decision Sciences, 11(1), 121-134.

Li, E. (1997). Perceived importance of information system success factors: A meta-analysis of group differences. Information and Management, 32, 15-28.

Licht, M. K. (1998). Multiple regression and correlation. In L. G. Grimm & P. R. Yarnold (Eds.), Reading and understanding multivariate statistics. Washington, DC: American Psychological Association.


Loehlin, J. C. (1992). Latent variable models: An introduction to factor, path, and structural analysis. Hillsdale, NJ: Lawrence Erlbaum.

Lu, H.-P., & Wang, J.-Y. (1997, April). The relationship between management styles, user participation, and system success over MIS growth stages. Information & Management, 32(3), 203-213.

Lucas, H. C. (1981). An experimental investigation of the use of computer-based graphics in decision-making. Management Science, 27(7), 757-768.

Lucas, H. C. (1975a). Why information systems fail. New York, NY: Columbia University Press.

Lucas, H. C. (1975b). The use of an accounting information system: Action and organizational performance. The Accounting Review, 50(4), 735-746.

Lucas, H. C. (1975c). Performance and use of an information system. Management Science, 21(8), 908-919.

Mahmood, M. A., & Becker, J. D. (1985/1986). Effect of organizational maturity on end-users' satisfaction with information systems. Journal of Information Systems, 2(3), 37-64.


Mahmood, M. A., & Medewitz, J. N. (1985, October). Impact of design methods on decision support system success: An empirical assessment. Information & Management, 9(3), 137-151.

Mahmood, M. A., & Soon, S. K. (1991, September/October). A comprehensive model for measuring the potential impact of information technology on organizational strategic variables. Decision Sciences, 22(4), 869-897.

Mansour, A. H., & Watson, H. J. (1980). The determinants of computer-based information system performance. Academy of Management Journal, 23(3), 521-533.

Marcolin, B. L., Munro, M. C., & Campbell, K. G. (1997, Summer). End user ability: Impact of job and individual differences. Journal of End User Computing, 9(3), 3-12.

Meador, C. L., Guyote, M. J., & Keen, P. G. W. (1984, June). Setting priorities for DSS development. MIS Quarterly, 8(2), 117-129.

Melone, N. P. (1990, January). A theoretical assessment of the user satisfaction construct in information system research. Management Science, 36(1), 67-91.

Miles, R. H. (1980). Macro organizational behavior. Santa Monica, CA: Goodyear.


Millman, Z., & Hartwick, J. (1987, December). The impact of automated office systems on middle managers and their work. MIS Quarterly, 11(4), 479-490.

Munro, B. H. (1997). Statistical methods for health care research (3rd ed.). New York, NY: Lippincott.

Myers, B. L., Kappelman, L., & Prybutok, V. R. (1997, Winter). A comprehensive model for assessing the quality and productivity of the information system function: Toward a theory for information system assessment. Information Resources Management Journal, 10(1), 6-25.

Newcomer, K. E. (1991, September/October). Evaluating information systems: More than meets the eye. Public Administration Review, 51(5), 377-384.

Palvia, P. C., & Palvia, S. C. (1999, March). An examination of the IT satisfaction of small-business users. Information & Management, 35(3), 127-137.

Palvia, P. C., Palvia, S. C., & Zigli, R. M. (1992). In M. Khosrowpour (Ed.), Global information technology management. Harrisburg, PA: Idea Group Publishing.

Pedhazur, E. J. (1997). Multiple regression in behavioral research. New York, NY: Harcourt Brace.


Pedhazur, E. J. (1982). Multiple regression in behavioral research (2nd ed.). Orlando, FL: Holt, Rinehart, and Winston.

Purser, R. E. (1991, September). Sociotechnical system design principles for computer-aided engineering. Technovation, 12(6), 379-386.

Rencher, A. C. (1998). Multivariate statistical inference and applications. New York, NY: John Wiley & Sons.

Retherford, R. D., & Choe, M. K. (1993). Statistical models for causal analysis. New York, NY: John Wiley & Sons.

Rocheleau, B. (1999). The political dimensions of information systems in public administration. In G. David Garson (Ed.), Information technology and computer applications in public administration: Issues and trends. Hershey, PA: Idea Group Publishing.

Sabherwal, R. (1999, Winter). The relationship between information system planning sophistication and information system success: An empirical assessment. Decision Sciences, 30(1), 137-167.

Saunders, C. S., & Jones, J. W. (1992, Spring). Measuring performance of the information systems function. Journal of Management Information Systems, 8(4), 63-73.


Schewe, C. D. (1976, December). The management information system user: An exploratory behavioral analysis. Academy of Management Journal, 19(4), 577-590.

Schultz, R. L., & Slevin, D. P. (1975). A program of research on implementation. In R. L. Schultz & D. P. Slevin (Eds.), Implementing operations research/management science. New York, NY: American Elsevier.

Seddon, P. (1997, September). A respecification and extension of the DeLone and McLean model of IS success. Information Systems Research, 8(3), 240-253.

Seddon, P. B., & Kiew, M. Y. (1994). A partial test and development of the DeLone and McLean model of IS success. Proceedings of the International Conference on Information Systems (ICIS 94), Vancouver, BC, Canada, 99-110.

Seddon, P., & Yip, S. K. (1992). An empirical evaluation of user information satisfaction (UIS) measures for use with general ledger accounting software. Journal of Information Systems, 5, 75-92.

Shani, A. B., & Sena, J. A. (1994, June). Information technology and the integration of change: Sociotechnical system approach. The Journal of Applied Behavioral Science, 30(2), 247-291.


Sharma, R. S., Conrath, D., & Dilts, D. M. (1991, February). A socio-technical model for deploying expert systems: Part I: The general theory. IEEE Transactions on Engineering Management, 38(1), 14-24.

Sherman, B. A. (1997). Operationalization of information systems technology assessment. Unpublished doctoral dissertation, State University of New York, Buffalo, NY.

Sommerhoff, G. (1969). The abstract characteristics of living systems. In F. Emery (Ed.), Systems thinking (pp. 147-202). Baltimore, MD: Penguin.

Srinivasan, A. (1974). Alternative measures of system effectiveness: Association and involvement. Management Science, 21(2), 178-188.

Stevens, J. M., & McGowan, R. P. (1985). Information systems and public management. New York, NY: Praeger.

Straub, D. W. (1989, June). Validating instruments in MIS research. MIS Quarterly, 13(2), 147-165.

Swain, J. (1995, September). Issues in public management information systems. American Review of Public Administration, 25(3), 279-296.


Tait, P., & Vessey, I. (1988, March). The effect of user involvement on system success: A contingency approach. MIS Quarterly, 12(1), 91-108.

Teo, T. S. H., & Wong, P. K. (1998). An empirical study of the performance impact of computerization in the retail industry. Omega, 26(5), 611-621.

Terlage, R. (1994, Fall). Minimizing the risks in reengineering: A socio-technical approach. Information Strategy, 11(1), 611-620.

Thompson, J. D. (1967). Organizations in action. New York, NY: McGraw-Hill.

Torkzadeh, G., & Doll, W. J. (1999). The development of a tool for measuring the perceived impact of information technology on work. Omega, 27, 327-339.

Torkzadeh, G., & Doll, W. J. (1994, January). The test-retest reliability of user involvement instruments. Information & Management, 26(1), 21-31.

Torkzadeh, G., & Doll, W. J. (1991, Winter). Test-retest reliability of the end-user computing satisfaction instrument. Decision Sciences, 22(1), 26-37.

Trist, E. (1993). A socio-technical critique of scientific management. In E. Trist & H. Murray (Eds.), The social engagement of social science (Vol. II). Philadelphia, PA: University of Pennsylvania Press.


Van de Ven, A. H., & Ferry, D. L. (1980). Measuring and assessing organizations. Chichester, England: John Wiley & Sons.

Vasarhelyi, M. A. (1981, December). Information processing in a simulated stock market environment. Proceedings of the Second International Conference on Information Systems, 53-75.

Walton, R. E. (1985). Strategies with dual relevance. In E. E. Lawler III (Ed.), Doing research that is useful for theory and practice (pp. 76-204). San Francisco, CA: Jossey-Bass.

Weisberg, H. F., Krosnick, J., & Bowen, B. D. (1989). An introduction to survey research and data analysis. Boston, MA: Scott Foresman and Co.

Woodroof, J. B., & Kasper, G. M. (1998). A conceptual development of process and outcome user satisfaction. Information Resources Management Journal, 11(1), 37-42.

Yuthas, K., & Young, S. T. (1998, January). Material matters: Assessing the effectiveness of materials management IS. Information & Management, 33(3), 115-124.

Zmud, R. W. (1979, October). Individual differences and MIS success: A review of the empirical literature. Management Science, 25(10), 966-979.


Appendixes


Appendix A

English Version of the End Users Questionnaire


Section I: Please circle the most appropriate answer that describes your perception of the system quality.

1 2 3 4 5 6 7
Extremely X   Quite X   Slightly X   Neutral, does not apply   Slightly Y   Quite Y   Extremely Y

1. How do you evaluate the elapsed time between a user-initiated request for service or action and reply to that request?

Fast: 1...2...3...4...5...6...7: Slow Consistent: 1...2...3...4...5...6...7: Inconsistent

2. How do you evaluate the ease or difficulty of utilizing the capability of the computer system?

Simple: 1...2...3...4...5...6...7: Complex   Easy-to-use: 1...2...3...4...5...6...7: Hard-to-use

3. How do you evaluate the set of vocabulary, syntax, and grammatical rules used to interact with the computer system?

Simple: 1...2...3...4...5...6...7: Complex   Easy-to-use: 1...2...3...4...5...6...7: Hard-to-use

4. How do you evaluate the relative balance between the cost and the considered usefulness of the computer-based information products or services that are provided? The costs include any costs related to providing the resource, including money, time, manpower, and opportunity. The usefulness includes any benefits that the user believes to be derived from the support.

Positive: 1...2...3...4...5...6...7: Negative Good: 1...2...3...4...5...6...7: Useless

5. Your feelings of assurance or certainty about the systems provided.

High: 1...2...3...4...5...6...7: Low   Good: 1...2...3...4...5...6...7: Bad

6. How do you evaluate the capacity of the information system to change or to adjust in response to new conditions, demands, or circumstances?

Flexible: 1...2...3...4...5...6...7: Rigid   High: 1...2...3...4...5...6...7: Low

7. How do you evaluate the ability of systems to communicate/transmit data between systems servicing different functional areas?

Sufficient: 1...2...3...4...5...6...7: Insufficient Good: 1...2...3...4...5...6...7: Bad

Section II: Please circle the most appropriate answer that describes your perception of the Information Quality.

1 2 3 4 5 6 7
Extremely X   Quite X   Slightly X   Neutral, does not apply   Slightly Y   Quite Y   Extremely Y

1. How do you evaluate the correctness of output information?

High: 1...2...3...4...5...6...7: Low   Sufficient: 1...2...3...4...5...6...7: Insufficient

2. How do you evaluate the availability of the output information at a time suitable for its use?

Timely: 1...2...3...4...5...6...7: Untimely   Consistent: 1...2...3...4...5...6...7: Inconsistent


3. How do you evaluate the variability of the output information from that which it purports to measure?

Consistent: 1...2...3...4...5...6...7: Inconsistent   High: 1...2...3...4...5...6...7: Low

4. How do you evaluate the consistency and dependability of the output information?

High: 1...2...3...4...5...6...7: Low   Sufficient: 1...2...3...4...5...6...7: Insufficient

5. How do you evaluate the age of the output information?

Good: 1...2...3...4...5...6...7: Bad   Adequate: 1...2...3...4...5...6...7: Inadequate

6. How do you evaluate the comprehensiveness of the output content?

Complete: 1...2...3...4...5...6...7: Incomplete   Sufficient: 1...2...3...4...5...6...7: Insufficient

7. How do you evaluate the material design of the layout and display of the output contents?

Good: 1...2...3...4...5...6...7: Bad   Readable: 1...2...3...4...5...6...7: Unreadable

8. How do you evaluate the amount of information conveyed to you from the computer-based systems?

Concise: 1...2...3...4...5...6...7: Redundant   Necessary: 1...2...3...4...5...6...7: Unnecessary

9. How do you evaluate the degree of congruence between what you want or require and what is provided by the information products and services?

Relevant: 1...2...3...4...5...6...7: Irrelevant   Good: 1...2...3...4...5...6...7: Bad

Section III: Please circle the most appropriate answer that describes your usage of the information system.

1. On an average working day that you use a computer, how much time do you spend on the system?
(1) Almost never   (2) Less than ½ hour   (3) From ½ hour to 1 hour   (4) 1-2 hours   (5) 2-3 hours   (6) More than 3 hours

2. On the average, how frequently do you use a computer?
(1) Almost never   (2) Once a month   (3) A few times a month   (4) A few times a week   (5) About once a day   (6) Several times a day

3. With respect to the requirements of your current job, please indicate to what extent you use the computer to perform the following tasks: (Please circle one)

Not at all                                To a great extent
1          2          3          4          5

1. Historical reference   1 2 3 4 5


2. Looking for trends   1 2 3 4 5

3. Finding problems/alternatives   1 2 3 4 5

4. Planning   1 2 3 4 5

5. Budgeting   1 2 3 4 5

6. Communicating with others   1 2 3 4 5

7. Controlling and guiding activities   1 2 3 4 5

8. Making decisions   1 2 3 4 5

4. With respect to the requirements of your current job, please indicate the number of packages you use from the following: (Please check)

(1) Spreadsheets ( )   (2) Word processing ( )
(3) Data management packages ( )   (4) Modeling systems ( )
(5) Statistical systems ( )   (6) Graphical packages ( )
(7) Communication packages ( )   (8) Own programming ( )
(9) 4GL ( )   (10) Others ( )

Section IV: On the following, please circle the number which best reflects your overall evaluation of the system.

1 2 3 4 5 6 7
Extremely   Quite   Slightly   Neutral, does not apply   Slightly   Quite   Extremely

(1) How adequately do you feel that the system meets the information processing needs of your area of responsibility?

Adequate: 1...2...3...4...5...6...7: Inadequate

(2) How efficient is the system?

Efficient: 1...2...3...4...5...6...7: Inefficient

(3) How effective is the system?

Effective: 1...2...3...4...5...6...7: Ineffective

(4) Overall, are you satisfied with the system?

Dissatisfied: 1...2...3...4...5...6...7: Satisfied

Section V: Please indicate the extent to which information systems have impacted your job in the following:

1 2 3 4 5
Not at All   A Little   Moderately   Much   Great Deal


Task productivity:
(1) Information system allows me to accomplish more work than would otherwise be possible.   1 2 3 4 5
(2) Information system increases my productivity.   1 2 3 4 5
(3) Information system application saves me time.   1 2 3 4 5

Task innovation:
(4) Information system helps me try out innovative ideas.   1 2 3 4 5

Customer satisfaction:
(5) Information system helps me meet customer needs.   1 2 3 4 5
(6) Information system improves customer satisfaction.   1 2 3 4 5
(7) Information system improves customer service.   1 2 3 4 5

Management control:
(8) Information system helps the management to control the work process.   1 2 3 4 5
(9) Information system improves management control.   1 2 3 4 5
(10) Information system helps management control performance.   1 2 3 4 5

Section VI: Demographic Information.
1. Your job title: ................................................................
2. How long have you been in this position? .......(Years) .......(Months)
3. Your gender: ( ) Male   ( ) Female
4. Your age: ( ) Less than 20   ( ) 20 to 29   ( ) 30 to 39   ( ) 40 to 49   ( ) More than 49
5. Your education: ( ) Less than high school   ( ) High school   ( ) High school and some college   ( ) Bachelor   ( ) Master   ( ) Doctorate
6. Are you: ( ) Kuwaiti   ( ) Non-Kuwaiti
7. How many years have you been working in government?
( ) Less than one year   ( ) 1-5 years   ( ) 6-10 years   ( ) 11-15 years   ( ) 16-20 years   ( ) 21-25 years   ( ) Over 26 years
8. How many years have you been working in your current agency?
( ) Less than one year   ( ) 1-5 years   ( ) 6-10 years   ( ) 11-15 years   ( ) 16-20 years   ( ) 21-25 years   ( ) Over 26 years
9. How long have you been working with information systems (computers)? .....(Years) .....(Months)

End of Questionnaire


Appendix B

English Version of the Management Questionnaire


Section I: Please indicate the extent to which the information system has helped your institution in the following:

1 2 3 4 5 6 7
Not Much                                          Extensively

(1) Distinguishing your institution from similar institutions.

Not much: 1...2...3...4...5...6...7: Extensively

(2) Reducing administrative costs.

Not much: 1...2...3...4...5...6...7: Extensively

(3) Improving the efficiency of internal operations.

Not much: 1...2...3...4...5...6...7: Extensively

(4) Enhancing the institution's reputation.

Not much: 1...2...3...4...5...6...7: Extensively

(5) Enhancing communication with other organizations.

Not much: 1...2...3...4...5...6...7: Extensively

(6) Enhancing and improving coordination with other organizations.

Not much: 1...2...3...4...5...6...7: Extensively

(7) Improving decision making.

Not much: 1...2...3...4...5...6...7: Extensively

(8) Making the institution successful overall.

Not much: 1...2...3...4...5...6...7: Extensively


Section II: Demographic Information.
1. Your job title: ................................................................
2. How long have you been in this position? .......(Years) .......(Months)
3. Your gender: ( ) Male   ( ) Female
4. Your age: ( ) Less than 20   ( ) 20 to 29   ( ) 30 to 39   ( ) 40 to 49   ( ) More than 49
5. Your education: ( ) Less than high school   ( ) High school   ( ) High school and some college   ( ) Bachelor   ( ) Master   ( ) Doctorate
6. Are you: ( ) Kuwaiti   ( ) Non-Kuwaiti
7. How many years have you been working in government?
( ) Less than one year   ( ) 1-5 years   ( ) 6-10 years   ( ) 11-15 years   ( ) 16-20 years   ( ) 21-25 years   ( ) Over 26 years
8. How many years have you been working in your current agency?
( ) Less than one year   ( ) 1-5 years   ( ) 6-10 years   ( ) 11-15 years   ( ) 16-20 years   ( ) 21-25 years   ( ) Over 26 years
9. How long have you been working with information systems (computers)? .....(Years) .....(Months)

End of Questionnaire


Appendix C

Letter of Approval from the Human Subjects Committee at Pennsylvania State University


Penn State
Vice President for Research
Office for Regulatory Compliance
The Pennsylvania State University
212 Kern Graduate Building, University Park, PA
www.research.psu.edu

Date: [illegible in scan]
From: [signature illegible], Director of Regulatory Affairs
To: [recipient name illegible in scan]
Subject: Results of Review of Proposal - Expedited (ORC #[illegible])
Approval Expiration Date: April 21, 2001

"Evaluating Information System Success in Public Organizations: The Seven Dimensions Model"

The Behavioral and Social Sciences Committee of the Institutional Review Board has reviewed and approved your proposal for use of human subjects in your research. This approval has been granted for a one-year period.

Approval for use of human subjects in this research is given for a period covering one year from today. If your study extends beyond this approval period, you must contact this office to request an annual review of this research.

Subjects must receive a copy of any informed consent documentation that was submitted to the Compliance Office for review.

By accepting this decision you agree to notify the Compliance Office of (1) any additions or procedural changes that modify the subjects' risks in any way and (2) any unanticipated subject events that are encountered during the conduct of this research. Prior approval must be obtained for any planned changes to the approved protocol. Unanticipated subject events must be reported in a timely fashion.

On behalf of the committee and the University, I thank you for your efforts to conduct your research in compliance with the federal regulations that have been established for the protection of human subjects.

cc: K. English, R. Chrisholm, S. Peterson, H. Sachs

An Equal Opportunity University


Appendix D

Letters of Approval from Participating Ministries


KUWAIT UNIVERSITY
College of Administrative Sciences
Office of the Dean

[Arabic-language letter of approval; the text is not legible in the scanned copy.]

Operator 2523911 Ext. 3001 - Fax 2528477 - P.O. Box 5486, Kuwait

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 185: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

MINISTRY OF INTERIOR
General Administration Department

[Arabic-language letter of approval; the text is not legible in the scanned copy.]


MINISTRY OF INTERIOR
General Administration Department

[Arabic-language letter of approval; the text is not legible in the scanned copy.]

with permission o, the copyright owner. Former reproduction prohibited without permission

Page 187: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

MINISTRY OF INTERIOR -u W -U l S jljjGmmi ArfmlitontlMi D m I i-U )l Sjb>l

Data: !••• jW T Y

a ^ \ o r ^ e v

f> a i t j »«u i—uji ijijVi fu j- j- j ^ uji jh*_ji

fa? n jji*S JfA >fiijti n ■! jl i « y i ^ 1 al

| | j ^ l j Ij VI ^^kJI ^ U I m c w jU I L jk a « LjjpW* Uil^M ( Ca } <JU ^UmT a a a/^/\T

a ijlj>JL igjj*inll (^1 4 Ijl ^1* yUJI li+f

jut J k j\ Jijt-* «afi IX* c-j>: ^ J iljiij u u v i j (OftiL j* m ii u4A.jiiiLri #Im I ^ L J I i ^ l j ^ 1*11 i h l ^ i J I ^ t i t J J t g

a 1 - j J k i l i-.lj.O JIIIM >*J J j - * - W'Aitj

fU j - J - ^

o jL y i j j^ u j i-u jiijL v i

h O z z 'j-* >

. t i l U j . . , C U . j j i l v l i £ J I > • U _ J : d . U * ^ l l

• aStlUa JjJ - Al i Ifl*#4 /. i _ _ L J I / l_ i— i

O pr-4144172• 4144613• Fu:4t42697 l A t m V ^ l i . t A U W T . I A t f W r illxJI

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 188: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

MINISTRY OF INTERIOR iJU-IjJl i j l j j

OiiitBetNe. T*** T V

* N f \ U rJerJt

f*j—J*1 j j j — U 1— UJI SjIjVI f U j - j - J^bliJI » : - »

• ^>*lll Mf#>i ; » » B J#-* | | ■ ! <|l

Jl J -

Cj^ 1 Ifl '■ < 1- - 1S Ui~U . U J I C ,U t S ~ i l U i l j U ( < i » U ^ i J M ' j ) * ^ u y U f T . . . / V ' T

* C g l j * n « I I £ l 2 J I l i f d

y ^ » J i j l l u U - * 5 — V I I i - C i j j i J i l j i l j l i L - V l j f U l l . J . ^ i - . n 'S + J * I J J

~ .LSI ^ u j i «-.!>; ^ u i a U x j i u u J - J i 5 j f iS jW. L * L a i U j j i i

• l l l l | £ | l . ‘ » '» h J j ^ Ij l . i f i j

fU J - J - . /v

( ^ 7 3 7 1 ^ 0 U U I ijLVI

4 * M j & <& /■& >

• U i l > - i l y U i j l l * t M ! i l l l A i j l l

y j J • J i t l i j J I j y i j A i J I

.Jk_UJI/Ui-Af-t A*“ • -*

V .____________________________________JOpt.:4144172• 4144(1)«Fu:4l42697 l A m W . ^ l i . i A t l WT . l A l f W r 3LU»

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 189: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

KmnttUntanttjC*MAtt*Mwld«M

«2y^Ol3Le«l^1 :**

in JweLwlwSLr • • *

Ca jjt* Juu — Jail Ji* / JmII irfiil otLlaaKt gjj J* &IjU UJ<

^ ^ l i , J s v - f . j * - ^ - > J I U .W : ^ j b > \ p U l S j T - i - U I i j b V l ,— 1

« _ > t j j l ) * J L J a i i-£ * i

.Uj-il

<«< tl* J l £»w j j ^ j U ^ jTj/U

2j ^ l jy»P • •

j-** li/* j j SjH

<s«iCnu>t 13068 ju-ji tut . r«mn ui i pBT* 2323911 • P.O.Boa S4M SMI,Cod*No. 13066 Kunl

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 190: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

KUWAIT UMVEKSTTY W j M M*b>Cdhyrf««lihtr»HwSriif Fjfefl ^

OtBttaf tfca D«M imW i n*«

(■>» * &»J Jjljj j £ j> -

((( Jwj i J f

m j "-*■ jyw — Jtll Ji* / jjJl otLl»Vl gj/ j^H (/y

—c*^t i*»W: iyUj/i fjUi - *#ui tjbyi |m *

U j j l l >U1 «-.»A J) < » j^ j J 11 u Im II j j y Laf*< Ip j l l j ^ / l y

.Uj-lt • •

«<i

iJSJl►

■Ua*

C>U110SSUi Ji«lA'\v^ .. T*TAIWw(U.r"t^UiT*mnUir.T«\-tMT«t 3910101 • O p m w 2323911 E it 3001 • F o . 232*477 • P.O. Box MM 13033 Kawtil

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 191: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

KUWAIT UNIVERSITY Coikft afAdalaistnifreScieaca

175

Office of tfc*D«aa 1 wS

UU.I t j \ j j SjU

<<< * V * rsu * u

2m« j -,e' Jmh — JJji / JmJV S^lil C»tLl«Vl £jj SSiljll

j ilj ^ Su J* - c~/3l fjUt SJT - oUl JjbV> pjb

Aatfljjll <Ubeto"»U»l W^LJt Ohlp JJ WLwllI JJii >*»/

<<<yJJI ii> JI Q^»S J * j - » - O S ^ "

*rjjaiJ055Wi-l\«UAy.w.-T»TA»VYkrtU.r--\^iUT«TTM\yLf.T»\-\-\i >CT « l . 3510101 • O p e r a t o r 2523911 E * » . 300t • F i x . 2528477. P 0 lint 118/S I W « K m * * . /

Reproduced w » pennies,on C - c o p ^ n , owned Rudder re p ro d u c e p r o v e d wddou, penn iss io .

Page 192: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

KUWAIT UNIVBBSnY'm m t

dgleM iaieCy '•

r-.> 1

b

f

' ' • ' H*

^ 0 1* *J4* “ «->$$* “ '**' UV* p-Jj

U j i ^ J t * *u i^ .u i< »i; ^ t* u -H iiS 'l i:.Hr; l^.

.u u ir

• •£*

o-** ( ^

■’1'in‘ff

■ T : v - ; $ g

. !

•.;*. /" ' *v’

A'U&SK-, i l U f A « * * f f .t

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 193: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

Ku w a it u w v r a s n rCifcpiMtaM—iwlitawi

K I #

• :• • • 'i^ S S E ® .

r ^ n• j;;i _

(MBflatftteDaaa 'ii •: • i »; ‘ ♦>c

'■ "■ •■ • ‘:;Vv: . - . ■ . '*5' "

1

np&

/J -J l , U l i f V l , - J t i > .* t

1

• * .’•*(«l ■ ' .'.

I - ‘1 ’*. ISj :• •i .

^ J* v*»Wjr fj **' 4 ^ ~ ijL>)h |m*

jll <uLle *Li «i^Ul <«*rlj j j 11 wjU*1I UtTjJ ji jj*y LJ*i ijSjll *,

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.

Page 194: Evaluating information system success in public organizationsEvaluating Information System Success in Public Organizations

Kuwait UniversityCsIiqi of MrinMMiw SdMOH

vno*n3\^A

* j \ j j d f j

. 0((t jm j <.«W SL£

. • • •

£« JJl* / Ju«Jl a^U»1 oCL2«»Vl iujjll

^ t‘ji* ,.llj y* *■«■> j Ip — C -i^ l SjmI^ Vijb)ll f>UI - *-»Ul |« —<i

**eljJl4uLl» *LII j j 11 ^lU^ill *j& J J j l L ^ i j

.U*U • •

3 J^ \ JmP • •

I jj* ju£jM

* u 0 1 V > J 1 3 0 6 9 ^ j o « J I > ^ J I . J U - J I a l A I v w * - T a m i l U l o f l l C i^ O

T oL 2 3 2 3 9 1 1 • P . O .B o x 9 4 M S M C o d a N o . 1 3 0 9 5 K u w M

Cy^ttllLeeLg*»JI«-

Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.


Appendix E

Signed Letters from the Translators


To Whom It May Concern

Subject: Arabic and English Translation of Two Questionnaires

This is to certify that the undersigned performed the Arabic and English translation of the two questionnaires used in Mr. Helaiel Almutairi’s study.

Date: April 10, 2000

Signature: [signed]

Mohamed Mahdi Mohamed, MPA, ABD
Research Assistant, School of Public Affairs
Penn State Harrisburg


To: Whom It May Concern.

Subject: Arabic and English Translations of Two Questionnaires.

This is to certify that the undersigned performed the Arabic and English translation of the two questionnaires used in Mr. Helaiel Almutairi’s study.

Date: [handwritten]

Signature:

IMAM & DIRECTOR OF THE ISLAMIC CENTER

MASJID AS-SABEREEN

ADEEB F. RASHEED


Appendix F

Arabic Version of the End Users Questionnaire

[The Arabic-language end-user questionnaire occupies several scanned pages here. The text is too degraded in this reproduction to transcribe or translate; the visible layout appears to consist of numbered multiple-choice items grouped into sections, followed by demographic questions.]

Appendix G

Arabic Version of the Management Questionnaire

[The Arabic-language management questionnaire occupies two scanned pages here. The text is too degraded in this reproduction to transcribe or translate; the visible layout appears to consist of numbered items with numeric rating scales, followed by demographic questions.]

Appendix H

English Version of the Cover Letter


Pennsylvania State University-Harrisburg
School of Public Affairs

777 West Harrisburg Pike Middletown, PA 17057

Tel: (717) 948-6050 Fax: (717) 948-6320

Dear Sir/Madam,

How do you know whether the information system in your organization is successful or not? What measures do you use to gauge that success? And are you sure that the measures you use are the most appropriate ones?

For my dissertation research, I am investigating how to evaluate information systems in the public sector, and thus attempting to answer the above questions. Conducting the study in your organization has been approved by top management.

The enclosed survey is designed to gather information about the various measures that are used to evaluate information systems success. The collected information will play a major role in developing a comprehensive model for evaluating information systems in the public sector.

Consequently, your participation in this study is essential to its success in developing the comprehensive model. Participation in this study is voluntary. If you decide to participate, please carefully read the instructions in each section and answer all questions without discussing them with anyone. There are no right or wrong answers to these questions. Usually, your first reaction to a question is a good indication of how you feel. Mark the response that best indicates your reaction, and do not spend too much time on any one item. After completing the survey, please return it to me. Completing the survey will take 15–25 minutes.

Your answers will be kept strictly confidential. No one other than me will have access to them. Individual responses will be anonymous. The data will be aggregated and analyzed only on a group basis.

The study’s findings will be provided to you upon request. If you have any questions or comments, please feel free to contact me. Thank you in advance for your participation in this study.

Sincerely,

Helaiel Almutairi
Doctoral Student
E-mail: [email protected]


Appendix I

Arabic Version of the Cover Letter

[The Arabic version of the cover letter occupies one scanned page here. The text is too degraded in this reproduction to transcribe or translate; it is signed by Helaiel Almutairi and closes with his telephone number and e-mail address.]


VITA

AUTHOR: Helaiel Almutairi

COLLEGES ATTENDED AND DEGREES CONFERRED:

Kuwait University, Bachelor of Political Science, June 1988

Carnegie Mellon University, Pittsburgh, Pennsylvania, Master of Public Management, December 1996

The Pennsylvania State University, Middletown, Pennsylvania, Ph.D. in Public Administration, May 2001

PROFESSIONAL EXPERIENCE:

Administrative Researcher, Department of Nationality and Passports, Ministry of Interior, Kuwait, September 1988 to July 1994
