Evaluating Information System Success in Public Organizations
INFORMATION TO USERS
This manuscript has been reproduced from the microfilm master. UMI films
the text directly from the original or copy submitted. Thus, some thesis and
dissertation copies are in typewriter face, while others may be from any type of
computer printer.
The quality of this reproduction is dependent upon the quality of the
copy submitted. Broken or indistinct print, colored or poor quality illustrations
and photographs, print bleedthrough, substandard margins, and improper
alignment can adversely affect reproduction.
In the unlikely event that the author did not send UMI a complete manuscript
and there are missing pages, these will be noted. Also, if unauthorized
copyright material had to be removed, a note will indicate the deletion.
Oversize materials (e.g., maps, drawings, charts) are reproduced by
sectioning the original, beginning at the upper left-hand corner and continuing
from left to right in equal sections with small overlaps.
Photographs included in the original manuscript have been reproduced
xerographically in this copy. Higher quality 6” x 9” black and white
photographic prints are available for any photographs or illustrations appearing
in this copy for an additional charge. Contact UMI directly to order.
ProQuest Information and Learning, 300 North Zeeb Road, Ann Arbor, MI 48106-1346 USA
800-521-0600
Reproduced with permission of the copyright owner. Further reproduction prohibited without permission.
The Pennsylvania State University
The Graduate School
School of Public Affairs
EVALUATING INFORMATION SYSTEM SUCCESS IN
PUBLIC ORGANIZATIONS: A THEORETICAL MODEL AND
EMPIRICAL VALIDATION
A Thesis in
Public Administration
by
Helaiel Almutairi
Submitted in Partial Fulfillment of the Requirements
for the Degree of
Doctor of Philosophy
May 2001
UMI Number: 3014588
Copyright 2001 by
Almutairi, Helaiel M. F.
All rights reserved.
UMI Microform 3014588
Copyright 2001 by Bell & Howell Information and Learning Company. All rights reserved. This microform edition is protected against
unauthorized copying under Title 17, United States Code.
Bell & Howell Information and Learning Company 300 North Zeeb Road
P.O. Box 1346, Ann Arbor, MI 48106-1346
We approve the thesis of Helaiel Almutairi.
Date of Signature
Rupert F. Chisholm
Professor of Management
Thesis Advisor
Chair of Committee
Coordinator for Graduate Programs in Public Administration

Mehdi Khosrowpour
Associate Professor of Information Systems

Robert F. Munzenrider
Associate Professor of Public Administration
Chair of Public Policy and Administration
Abstract
Assessing the success of information systems within organizations has been
identified as one of the most critical issues of information system management in both
public and private organizations. In the private sector literature, there are several
conceptual and empirical studies that investigated the issues of how to evaluate
information systems. In the public sector literature, on the other hand, there is a scarcity
of studies dealing with the evaluation of information systems in public organizations.
This issue is expected to grow in importance as more usage and investment are
allocated to information systems within public organizations.
The primary purpose of this study is to develop a model that can be used to
measure the success of information systems within public organizations. This study drew on
the cumulative information systems research in both public and private organizations to
develop the study model.
In this study, DeLone and McLean’s model was used as the conceptual
foundation for research. This study conceptualized the DeLone and McLean model in
three frames. The outer frame is called the external environment frame, the middle frame
is called the task environment frame, and the inner frame is called the organizational
boundary frame. The DeLone and McLean model proposed that there are six variables
(System Quality, Information Quality, System Usage, User Satisfaction, Individual
Impact, and Organizational Impact) that measure the success of an information system within
the boundary of an organization; the model does not include any external actors in the evaluation
process. A seventh variable, External Environment Satisfaction, was added to the DeLone
and McLean model to denote the satisfaction of external actors.
In this study, the relationships in the DeLone and McLean model were tested. Six
Kuwaiti public organizations were randomly selected as the study’s sample. A survey
methodology was chosen to collect data. A total of 363 usable questionnaires were
obtained. Factor analysis, correlation analysis, regression analysis, and path analysis were
used to analyze the study’s model.
Initial findings of this study did not support the DeLone and McLean model as it
was originally proposed. The findings indicated that information systems success is a
three-variable model. This model proposes that Satisfaction affects Individual Impact
that, in turn, affects Organizational Impact. Also, Satisfaction directly affects
Organizational Impact. Based on the research findings, several implications for public
administration theory and management and future research are stated and proposed in the
conclusion.
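The recursive path model reported above (Satisfaction affecting Individual Impact, which in turn affects Organizational Impact, plus a direct Satisfaction path) can be sketched as a pair of ordinary least squares regressions, one per endogenous variable, which is a standard way to estimate path coefficients. This is an illustrative sketch on synthetic data, not the study's data or analysis code; the coefficient values below are invented.

```python
import numpy as np

def ols_coefs(X, y):
    """Ordinary least squares with an intercept column; returns fitted coefficients."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta  # [intercept, slope(s)]

# Synthetic data mimicking the hypothesized recursive structure:
# Satisfaction -> Individual Impact -> Organizational Impact,
# plus a direct Satisfaction -> Organizational Impact path.
rng = np.random.default_rng(0)
n = 363  # same size as the study's usable sample, for illustration only
satisfaction = rng.normal(size=n)
individual_impact = 0.6 * satisfaction + rng.normal(scale=0.5, size=n)
organizational_impact = (0.4 * satisfaction + 0.5 * individual_impact
                         + rng.normal(scale=0.5, size=n))

# Path coefficients come from regressing each endogenous variable
# on its hypothesized causes (one equation per endogenous variable).
p_ii = ols_coefs(satisfaction.reshape(-1, 1), individual_impact)
p_oi = ols_coefs(np.column_stack([satisfaction, individual_impact]),
                 organizational_impact)

print(round(p_ii[1], 2))                      # path Satisfaction -> Individual Impact
print(round(p_oi[1], 2), round(p_oi[2], 2))   # direct and mediated paths into OI
```

With a sample of this size, the estimated slopes land close to the generating coefficients, which is the sense in which path analysis "recovers" the hypothesized structure when the model is correctly specified.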
TABLE OF CONTENTS
Page
LIST OF FIGURES x
LIST OF TABLES xi
LIST OF ABBREVIATIONS xii
Chapter 1 INTRODUCTION 1
Chapter 2 LITERATURE REVIEW 5
External Environment and Information Systems in the Public Sector 6
Studies of Information System Success 13
System Quality 14
Measures of System Quality 19
Information Quality 20
Measures of Information Quality 22
System Use 23
Measures of System Use 27
User Satisfaction 29
Measures of User Satisfaction 32
Individual Impact 33
Measures of Individual Impact 34
Organizational Impact 35
Measures of Organizational Impact 36
Integrated Models of Information System Success 37
Literature Abstract and Assessment 46
External Environment and Information Systems in Public Sector 46
Studies of Information System Success 47
Integrated Models of Information System Success 49
Chapter 3 RESEARCH METHODOLOGY 52
Model Formulation 52
Model to be Tested 60
Research Question and Hypothesis 61
Operationalization 63
System Quality and Information Quality 63
System Use 64
User Satisfaction 64
Individual Impact 64
Organizational Impact 65
Population and Sample 66
Translation and Pilot Study 67
Data Collection Methods 70
Data Screening and Reliability of Measurement Instruments 74
Limitations of the Study 76
General Outline of Plan for Data Analysis 77
Chapter 4 RESEARCH FINDINGS 78
Respondent Characteristics 78
Age and Education 79
Gender 79
Length of Government Career and Years of Service in the Current Organization 80
Information System Experiences 81
Correlation Analysis 83
Factor Analysis 85
Factor Analysis of the Independent Variables (SQ, IQ, US, SU) 86
The System Quality scale 87
The Information Quality Scale 89
The System Usage Scale 89
The User Satisfaction Scale 90
Factor Analysis of the Dependent Variables (IM, OI) 95
The Individual Impact Scale 96
The Organizational Impact Scale 96
The Implications of Results of Factor Analysis on the Study’s Model 96
Modifying the 3b Equations 99
Modified Research Question and Hypothesis 100
Scales Reliabilities 100
Second Round of Correlation Analysis 105
Regression Analysis 106
First Regression Analysis: Regressing Individual Impact on Satisfaction 108
Second Regression Analysis: Regressing Organizational Impact on Individual Impact 110
Third Regression Analysis: Regressing Organizational Impact on Satisfaction and Individual Impact 111
The Implications of Results of the Regression Analysis on the Study's Model 112
Path Analysis 114
Findings of Path Analysis 114
A Comparison between the Results Produced by Regression Analysis and Path Analysis 120
Researchers in this area argued that external players must be taken into account when
evaluating information systems. Valid measures, however, are in short supply, if they exist
at all. The public information system management literature must mature more quickly to
afford public sector managers the necessary instruments to measure their information
systems.
Evaluating information systems is just as important in the private sector (Brancheau
& Wetherbee, 1987; Palvia, Palvia, & Zigli, 1992; Kim & Kim, 1999). However, unlike the
body of PMIS literature, there is no dearth of commentary and literature - either theoretical
or empirical - on evaluating information systems in the private sector workplace (King &
Rodriguez, 1978). The development of research in this area started with an emphasis on
efficiency, using a single measure for success. Most often, this single measure was based on
economic analysis. Researchers, however, shifted their emphasis toward user effectiveness
by focusing on user satisfaction, usage, information quality, system quality, and
organizational impact, although the single measure was still used to measure success.
More recently, however, an increased awareness of the complexity of evaluating
information systems issues has prompted several researchers in this area to question this
approach and to doubt any proposals that single criteria are effective as definitive success
variables (Kanellis & Paul, 1999). Consequently, more pluralistic approaches have started
to appear in this area of research. These approaches are based on the interpretations of case
studies rather than surveys and laboratory experiments (ibid). Several models for evaluating
information systems have emerged from these pluralistic approaches. These models attempt
to capture key dimensions of success and the interaction between these dimensions.
However, these models have not been comprehensive enough to include the external
environment, and have rarely been tested empirically.
The main objective of this study is to develop a comprehensive model to help public
sector managers evaluate their information management systems. Viewed in systems terms,
the model will provide public sector managers with the basic feedback function as well as
provide a necessary component for organizational learning. Literature concerning the
successful implementation of information management systems - in both the private and
public sectors - will be used to develop this model. First, a comprehensive theoretical
model will be proposed, then part of the model in this study will be empirically tested, as a
first step in developing a more comprehensive model. Established measures taken from the
existing literature will be used in operationalizing the model.
This is the first comprehensive study concerning both the internal organizational
variables and external environmental variable to be conducted in the public sector in
Kuwait. Thus, there is a distinct opportunity to provide critically needed knowledge on the
dimensions of information systems success in the public sector, on the interplay between
these dimensions, and on the relative importance of these various dimensions. This study
will enrich the PMIS literature and help assess the usefulness of existing concepts, models,
and instruments.
Chapter 2
LITERATURE REVIEW
This chapter presents three bodies of literature. Section one presents studies that
have investigated the relationship between the external environment and information
systems within public organizations and the implications of this relationship on evaluating
information systems in the public sector. The common denominator of these studies is the
emphasis on the importance of external variables in evaluating information systems in the
public sector.
Section two presents studies that have evaluated information system success. Most
of these studies were conducted in the private sector. These studies investigated and
analyzed different dimensions of information system success and how these dimensions are
related to other organizational variables (e.g., task characteristics, race, user participation,
job satisfaction, etc.). Most of these studies focused on one or two dimensions of evaluating
information system success.
Section three presents studies that have attempted to develop comprehensive models
for evaluating information systems success by integrating the dimensions identified in the
studies in Section two.
External Environment and Information Systems in the Public Sector
Several information system researchers have emphasized the dependency of public
organizations on the external environment. On one hand, this dependency mandates that
public sector organizations design and manage effective information systems to enable these
organizations to collect, store, and disseminate information about their environments -
especially in a highly turbulent environment requiring effective techniques for monitoring
changes in the environment. On the other hand, IS managers in public organizations need to
take this dependency on the environment into account in IS design. The following
paragraphs will present a review of the implications of this dependency on the environment
on the management of information systems within public organizations, especially in the
area of evaluating information systems.
Stevens and McGowan (1985) attempted to develop a framework for public
information systems using a systems and contingency perspective. In their model, the
writers viewed both inside the organization and the external environment as composed of
subsystems. They asserted that the organization is composed of different management
levels (e.g., strategic, mid-level or coordination level, and operational level) and different
functions (e.g., human resource, financial, planning). According to the writers, each of these
functions and levels could be considered a subsystem that has its specific type of
information, decisions, and objectives. Regarding the external environment, the writers
proposed three types of environments. The first type is the operational environment, which
includes the external actors that are highly significant to the public organization, such as
interest groups, legislators, and service recipients. The second type of external environment
is called the general environment, and includes all of the external actors that operate in the
public organization environment, such as economic variables, technology variables, and
demographic variables. In the third type of external environment (the remote environment)
the writers included intangible factors that public organization managers deal with when
performing their functions, such as uncertainty, complexity, and threats.
The role the information system plays in the public sector organization is greatly
influenced by these external and internal subsystems. According to the researchers, when
public organization managers do strategic planning, they must take into account the
expectations of major outside and inside interests. One approach is to develop a database
that incorporates these expectations.
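The "database of expectations" idea in the paragraph above can be sketched as a single table keyed by stakeholder, tagged as internal or external to the organization. The schema and rows below are invented placeholders for illustration, not content from the source.

```python
import sqlite3

# Hypothetical sketch: one table of stakeholder expectations, tagged by
# whether the stakeholder sits inside or outside the organization.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE expectations (
    stakeholder TEXT,
    location TEXT CHECK (location IN ('internal', 'external')),
    expectation TEXT)""")
con.executemany("INSERT INTO expectations VALUES (?, ?, ?)", [
    ("legislature", "external", "timely budget reporting"),
    ("service recipients", "external", "shorter processing times"),
    ("operations staff", "internal", "reliable daily transaction system"),
])

# Strategic planners could then query the external expectations directly.
external = con.execute(
    "SELECT stakeholder FROM expectations WHERE location = 'external' "
    "ORDER BY stakeholder").fetchall()
print([row[0] for row in external])  # ['legislature', 'service recipients']
```

The point of the sketch is only that recording expectations in a queryable form makes the "major outside and inside interests" explicit inputs to planning.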
Another example is that in the operational environment there are legislative,
executive, judicial, and financial/budgetary controllers who impose certain authority and
financial standards on public organizations. For example, often, public organizations are
obliged to follow several legislative statutes (e.g., paper reduction acts) intended to improve
the internal operation of these organizations. In response to these standards, public
organization information structures should be able to generate relevant information for both
external reporting and internal control.
Stevens and McGowan (1985) identified several criteria to use to evaluate
information systems in public organizations: 1) accuracy and applicability of information
provided to managers and users, 2) timeliness of information, 3) User Satisfaction, and 4)
acceptance by managers and users. These researchers also proposed that these criteria could
be applied to the internal, operational, and control objectives, as well as the analysis of the
environmental influences that may directly affect internal organizational functions (p. 141).
In other words, these criteria could be used to assess the success of an information system from
the perspectives of both the internal and external users.
Bozeman and Bretschneider (1986) also attempted to develop a model for the Public
Management Information System (PMIS). These researchers strongly believed that external
factors, or what they called the distal environment (e.g., political and economic authorities),
influenced the internal factors in an organization, or what they called the proximate
environment; which include variables that are related to the work context and the attitudes
and behaviors of individuals in an organization. According to the researchers, this strong
external influence on the internal factors is what makes information systems within public
organizations different from those in private organizations.
Consequently, the researchers argued that MIS performance measures should reflect
the unique characteristics of public organizations. According to the researchers in both the
public and private sectors, accountability is important; however, this concept in public
organizations has greater importance as a result of the nature of the distal environment. For
example, public organization managers are more accountable to individuals and groups
outside the organization. Consequently, measurements of performance of information
systems should reflect the system’s ability to
...handle special queries that aggregate data in unanticipated ways, and produce special reports and analysis. These non-routine forms of analysis will have extremely short time frames, thus adding the dimension of timeliness to the measurement of accountability (Bozeman & Bretschneider, 1986, p. 482).
Furthermore, the researchers added that timely responses to external requests for data
are concerns when evaluating information systems in public organizations at the
environmental level. According to the researchers, during budget cycles, external political
players such as executive branch agencies and legislatures require data that enable these
external actors to evaluate public organizations. These researchers argued that “the degree
to which an organization responds to external data requests in a timely fashion with
appropriate and accurate data can have either positive or negative effects on MIS within the
organization” (Bozeman & Bretschneider, 1986, p. 482).
In an empirical study, Bugler and Bretschneider (1993) studied the adoption of
information systems in public organizations and found that there is a relationship between
external relationships and the adoption of information systems. Organizations that have
closer external relationships are found to have a higher interest in adopting information
systems for the purpose of improving these relationships.
In another empirical study, Bretschneider (1990) tested the following hypotheses:
(1) Public Management Information System managers must contend with a greater level of
interdependency across organizational boundaries than do private MIS managers, and (2)
Public Management Information Systems planning is more concerned with extra-
organizational linkages, while private MIS is more concerned with internal coordination.
After testing these propositions, Bretschneider (1990) concluded
The environment of PMIS differs from that of its private sector counterpart. The difference is primarily in the form of greater interdependencies, leading, at least in part, to increased accountability, procedural delays, and red tape. Secondly, within these more constrained environments, traditional MIS prescriptions are not automatically adopted. This suggests that the environment of public organizations has led to adaptation of standard management practices. In other words, the organizational environment affects or tailors the nature of management action (p. 543).
Rocheleau (1999) reviewed several cases of information system implementation
projects in several public organizations and concluded that “political factors are often the
most crucial in determining how successful information technology is” (p. 23). Rocheleau
recommended that “Managers [of information systems] will often have to be involved in
exerting political influence and engage systems outside their direct control in order to assure
a successful outcome” (p. 31). Including outside representatives in the evaluation process is
one form of engaging outside systems.
In studying the adoption of microcomputers in both private and public sectors,
Bretschneider and Wittmer (1993) found that organizational environment (i.e., greater levels
of interdependence across organizational boundaries and higher levels of red tape) and task
environment (i.e., the nature and characteristics of tasks) play major roles in innovation and
adoption of information technology. Thus, these researchers strongly recommended taking
into account the nature of these environments in the management of information systems.
In an empirical study, Mansour and Watson (1980) tested the applicability of the
private sector computer-based information system models in the public sector. The model
tested was:
CBIS performance = f(computer hardware and software, behavior, structural, and environmental variables)
Under each category, there were several specific variables. Under CBIS
Performance, there were applications’ performance, the degree of integration in the
database, the decision function provided by decision models, the organizational levels
served, and the interfaces between system elements. The Behavior category included
degree of top management involvement in systems development, the effectiveness of
relationships between computer specialists and other organizational personnel, the amount
of resistance to change by organizational personnel, and the quality and quantity of
computer specialists. Under the Structural category, there were the organizational
placement of the data processing function, the frequency with which educational programs
are offered to end users, the availability of interactive computing, and the length of time the
organization has operated a CBIS. Finally, the Environment category incorporated the
amount of competition the organization faces in the marketplace, the variety of products or
services offered by the organization, the frequency with which the organization offers new
products or services, the amount of customer requirements, and the amount of external
regulation.
The variables for each category were selected based on the outcome of a
comprehensive survey of the literature, which identified a list of variables in each category.
Second, a panel of IS experts reviewed the list. Variables were included in the final list
based on the weights that these experts assigned to the variables. The final list of variables
was tested on both private and public organizations, although the Environmental variables
were excluded in the public organization case. The researchers argued that this is due to
“lack of relevance [of the environmental variables] for governmental organizations, given
the way the environmental variables were defined. Governmental organizations function in
an environment that is much different from that faced by private business organizations” (p.
525). According to these researchers, even among government organizations, there are
considerable differences in the external environment. Consequently, Mansour and Watson
(1980) proposed that
In order to explore fully the impact of environmental variables on CBIS performance in governmental organizations, it would be necessary to categorize the different types of governmental organizations, develop appropriate environmental variables for each category, and collect data from organizations in the different categories (p. 526).
The researchers did not undertake this effort, but it is certainly a possible area for
future research.
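Mansour and Watson's scheme, as summarized above, amounts to variables grouped into categories and weighted by an expert panel. That can be sketched as a weighted scoring function; the category assignments follow the text, but the weights, ratings, and scale below are invented placeholders, not the study's data.

```python
# Hypothetical sketch of expert-weighted CBIS evaluation: each variable gets
# a panel-assigned weight, and an organization's score is the weighted mean
# of its ratings. Weights and ratings here are invented for illustration.

def cbis_score(ratings, weights):
    """Weighted average of variable ratings; higher means better CBIS performance."""
    total_weight = sum(weights[v] for v in ratings)
    return sum(ratings[v] * weights[v] for v in ratings) / total_weight

expert_weights = {
    "top_management_involvement": 0.9,  # Behavior category
    "specialist_user_relations": 0.8,   # Behavior category
    "interactive_computing": 0.6,       # Structural category
    "external_regulation": 0.5,         # Environment category
}
one_org_ratings = {                     # ratings on a hypothetical 1-5 scale
    "top_management_involvement": 4,
    "specialist_user_relations": 3,
    "interactive_computing": 5,
    "external_regulation": 2,
}

print(round(cbis_score(one_org_ratings, expert_weights), 2))  # 3.57
```

Dropping the Environment variables for a public organization, as Mansour and Watson did, would simply mean omitting those keys before scoring.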
Newcomer (1991) argued that users of information systems in public organizations
are not only the members of the organization, but also users that exist in the extended
environment such as legislative, central management and oversight agencies, program
clients, other governmental agencies, suppliers, and media. Thus, Newcomer argued that
these users should be taken into account when evaluating information systems.
Moreover, Newcomer proposed specific information system success indicators in
public organizations. These indicators included usefulness and reliability, ease of use, error-
resistant operations, authorized-use controls, protected system and operations, time savings,
system economic payoff or cost result, user acceptance, and contextual considerations
(which includes, among other things, the unique nature of public-sector access and
accountability). Regarding the last indicator, Newcomer (1991) stated, “Public-sector
information system evaluation must consider how well information systems meet numerous
legislative requirements” (p. 383).
Bozeman and Straussman (1990) also suggested taking into account the external
environment in evaluating information systems in public organizations. The researchers
stated
Public officials’ satisfaction (a surrogate for citizens’ satisfaction) with the final set of goods and services is one measure of PMIS... Such measures are important indicators of technological success of PMIS (p. 123).
In summary, the studies reviewed in this section of the literature review indicate that
there is close interdependency between information systems in public organizations and the
external environment. One implication of this interdependency is the extension of the
evaluation of information system process to include actors in the external environment that
can influence information systems.
Studies of Information System Success
A large number of studies have evaluated information systems in private
organizations. Most of these studies have attempted either to identify factors that influence
the success of the information system, or investigate how to measure information system
success (Glorfeld, 1994). Generally, most of these studies have focused on internal users
and impacts of information systems without taking into account external users and their
impacts on these systems.
Taking a different approach, DeLone and McLean (1992) focused on the
dependent variable itself: information system success. The researchers noted
that there are a large number of studies that have attempted to identify factors contributing to
information system success. These researchers also noted that one of the weaknesses of
these studies is the failure to clearly identify the dependent variable. Consequently, the
researchers organized the literature that was concerned with information system success into
a comprehensive taxonomy for the purpose of giving a more complete view of the
information system success issue. The taxonomy combined four traditional dimensions of
information system success - system quality, information quality, use, and user satisfaction
- with two other dimensions - individual impact and organizational impact. Second, the
researchers developed a comprehensive model for information system success that took into
account all six dimensions of information system success and the relationships among these
dimensions (Figure 1). Although DeLone and McLean (1992) argued that contingency
variables such as the environment of the organization being studied should be taken into
account, these variables were not a main dimension of their model.
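The causal structure just described can be written down as a small directed graph, which makes the six dimensions and their hypothesized links easy to scan. The edge list follows the commonly cited arrows of the 1992 model (quality dimensions feed use and satisfaction, which interact and feed individual and then organizational impact); treat it as a reading aid, not a reproduction of the original figure.

```python
# A reading aid: hypothesized causal links in the DeLone and McLean (1992)
# model, written as a directed edge list.
DM_EDGES = [
    ("System Quality", "System Use"),
    ("System Quality", "User Satisfaction"),
    ("Information Quality", "System Use"),
    ("Information Quality", "User Satisfaction"),
    ("System Use", "User Satisfaction"),   # use and satisfaction interact
    ("User Satisfaction", "System Use"),
    ("System Use", "Individual Impact"),
    ("User Satisfaction", "Individual Impact"),
    ("Individual Impact", "Organizational Impact"),
]

def causes_of(variable, edges=DM_EDGES):
    """Direct hypothesized causes of a success dimension."""
    return sorted({src for src, dst in edges if dst == variable})

print(causes_of("Individual Impact"))       # ['System Use', 'User Satisfaction']
print(causes_of("Organizational Impact"))   # ['Individual Impact']
```

The study's seventh variable, External Environment Satisfaction, would appear here as an additional node outside this edge list, which is precisely the gap the dissertation addresses.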
In the private sector information system literature, DeLone and McLean’s (1992)
taxonomy was described as comprehensive enough to take into account all dimensions of
information systems success (Seddon, 1997; Ballantine et al., 1996). As such, in the
following subsections, DeLone and McLean’s taxonomy will be used to organize findings of
studies that investigated information system success in private organizations. The review
will focus on two things: (1) identifying key variables and the relationships among them, and (2) how the variables were operationalized and measured.
System Quality
Studies examining system quality used features of the systems themselves to assess
quality. Some studies evaluated information systems by investigating how information
systems utilized organizational resources such as materials and financial resources. For
example, Kriebel and Raviv (1980, 1982) used microeconomics to develop and test a
mathematical model for evaluating the efficiency of computer services supply in
organizations. They attempted to model the input resources required and the output
products or services provided by the information system department.
Figure 1. DeLone and McLean model of information system success (system quality, information quality, use, user satisfaction, individual impact, and organizational impact). Source: DeLone and McLean, 1992.
In the same vein, Conklin, Gotterer, and Rickman (1982) studied the impact of
background jobs on response times. In this study, terminal response time was defined as the interval from the time the operator depressed the transmit key until the response character appeared on the screen; a stopwatch was used to measure the response time. Since Conklin and colleagues found that longer response times were related to decreased user satisfaction with the system, this study supports the importance of the user’s perception of system quality.
Using a different approach, a number of studies evaluated the quality of information
systems by examining organizational effectiveness (i.e., how well the users of the system
are accomplishing their organizational goals) and identifying factors that should exist in an
organization in order to ensure a high quality information system. For example, several
researchers have examined the relationship between user participation in the development of
information systems and system quality (Glorfeld, 1994). Edstrom (1977) investigated the
relationship between users’ influence in the different phases of the system development
process and information system success and found that there is a positive relationship
between users’ influence in the initiation phase and the perceived success of the system.
The participants in this study were asked to rate the implemented information system on a 7-
point Likert-type scale from complete failure to complete success.
Franz and Robey (1986) investigated the relationship between user involvement in
information system development and perceived system usefulness. The study was
conducted on 118 user managers from 34 companies. The researchers found that greater
user involvement in all information system development stages is related to greater
perceived usefulness (surrogate measure of system quality). In the same vein, Kaiser and
Srinivasan (1980) used the perceived worth of the information system as a measure of
system quality. The researchers found that there is a relationship between user involvement
and group process skills, such as the ability to adapt to change, communication skills, level
of conflict and agreement, and information technology effectiveness. The researchers stated
“clearly, user involvement with the activities of the system leads to higher measures of
perceived worth of the system” (p. 202).
In an experimental study, King and Rodriguez (1981) investigated the relationship
between participation and the users’ perception of the worth of the system (surrogate
measure of system quality). The researchers found support for the relationship between
participation and perceived worth of the system, but that participation did not lead to an
increase in system usage.
Similarly, Tait and Vessey (1988) investigated the relationship between user
involvement and system success. System success was measured using an instrument
developed by Bailey and Pearson (1983), which included several items that measured
system quality. Although the researchers did not find any support for this relationship, they
found that system complexity, time, and financial resource constraints have strong direct and
indirect effects on system success through user involvement.
The interest in the relationship between user involvement and information system success
led Torkzadeh and Doll (1994) to develop a measurement for user involvement. The
researchers assessed the short-range and long-range stability of the items that measure
perceived involvement, desired involvement, and involvement congruence using the test-
retest method. The researchers concluded that the instruments are internally consistent,
stable, and can be used with confidence in user involvement research without concern about a reactivity effect.
Goslar (1986) investigated the usefulness of several decision support system features
(used as surrogate measure of system quality) for marketing problem solving. Features
examined in this study were interrogation (e.g., what-if analysis, impact analysis,
sensitivity analysis), computation (e.g., standard arithmetic calculation, complex
polynomial fit), range analysis (e.g., normal distribution, uniform distribution, general
cumulative distribution), and simulation analysis. Goslar found that interrogation features,
computational features, and forecasting models were considered most useful by DSS users,
while range analysis features were considered the least useful.
Davis (1989), in several empirical studies, found that perceived usefulness (the effects of the system on work) and perceived ease of use (how easy the system is to use and interact with), which are two surrogates of system quality, are associated with system acceptance (current and future usage), with correlation coefficients ranging from .45 to .85. Davis also found that perceived usefulness and perceived ease of use are significantly correlated with each other (r = .69).
In the context of testing the technological acceptance model, Karahanna and Straub
(1999) found that usefulness (the belief that an information system is useful in job
performance) is affected by perceived ease of use (the extent to which an information system is user-friendly).
Yuthas and Young (1998) conducted a study to test whether user satisfaction and
system usage are appropriate indicators of decision-making effectiveness (system quality).
System usage was defined as the extent and nature of use of the information system. Satisfaction was defined as the extent of improvement in decision-making outcomes. Yuthas
and Young concluded that user satisfaction and system usage measures are not acceptable
alternatives to direct performance measurement.
Measures of System Quality
Researchers have used many surrogate measures for system quality, ranging from
single-item scales to multi-item measurements. For example, Barki and Huff (1985) used a
single semantic differential item to measure overall user satisfaction regarding decision
support systems. Similarly, Edstrom (1977) measured the success of information system
through one question by which users rated the implemented system. Multi-item instruments measured system quality through perceived value or worth, usefulness, and
perceived ease of use. For example, Davis (1989) developed and validated two
measurements for perceived usefulness and perceived ease of use. Each instrument consists
of six items.
Bailey and Pearson (1983) developed and validated instruments to measure general
user satisfaction. Seven items from this instrument were assigned to measure system
quality. This instrument has been validated by several researchers (Ives et al., 1983; Baroudi & Orlikowski, 1988; Iivari & Ervasti, 1994; Mahmood & Becker, 1985/1986) and has become
a standardized measure in the MIS field.
Doll and Torkzadeh (1988) developed an instrument to measure end user computing
satisfaction (EUCS). The instrument merged items that measure the quality of information
(content, format, and timeliness) with items that measure the quality of the system
(accuracy, ease of use). In the EUCS, there are 13 items, four of which were designed to
measure system quality (ease of use and accuracy). Torkzadeh and Doll (1991) and
Hendrickson, Glorfeld, and Cronan (1994) validated this instrument. Hendrickson and
colleagues conducted their study on public organizations and found that the EUCS measure
is valid and stable over time.
Information Quality
Researchers studying the information quality dimension have examined information
system output (i.e., information quality from users’ perspective), and how several
organizational variables are related to information quality. Gallagher (1974) studied the value of MIS in a medium-sized company using estimated annual dollar values and a semantic differential technique as two measures of the perceived value of information (see next section
for more detail). Gallagher found a positive relationship between the perceived value of
information and participation in the design of the system and managerial position. Users
who participated in the design of their information systems evaluated the output of those
systems more favorably than users who did not. Gallagher also found that managers
in upper-level managerial positions value MIS reports more highly than those lower in the
hierarchy.
Iivari and Koskela’s (1987) overview of the PIOCO model made a connection
between information system design and information quality. The PIOCO model is composed of three sub-models. The P model is defined as “restricted, planned change in the host
system/organization” (p. 406). The second model is I/O, which presents the information
system from the viewpoint of the user. The third model is C/O, which determines the
internal structure and action of an information system. This study is relevant to the P model,
which takes into account the viewpoints of external users of the information system, such as
interest groups. However, the researchers did not provide the means to measure the effect of
these external players on the quality of information systems. Iivari and Koskela (1987)
justify this by stating
It is difficult to provide effectiveness criteria (schemas) of wide applicability. Due to the diversity of potential effects, the principle of many points of view should be applied to their identification, reflecting the various interests involved and taking into account not only the economic effects...but also various social, technical, and managerial effects (pp. 414-415).
In the same vein, Mahmood and Medewitz (1985) investigated the relationship
between the selection of a DSS design method and its ultimate success. DSS success was
measured through DSS usage, user satisfaction, and user attitude and perception criteria.
Data was collected from managers, intermediaries, and designers. Among the most highly
rated DSS successes were several items related to information quality, such as the accuracy of DSS reports, useful output reports, and better types of output reports. Consequently, this
study notes the connection between system design and information quality.
Blaylock and Rees (1984) tested the relationship between a decision-maker’s cognitive style and the output of the information system (i.e., information). The researchers used
Larcker and Lessig’s (1980) questionnaire measuring usefulness of information by
examining two components: importance of information and usefulness of information. The
first term is defined as the “quality that causes a particular information set to acquire
relevance to the decision maker” (p. 123). Usefulness is defined as the “information quality
that allows a decision maker to utilize the [information] set as an input for problem solution”
(p. 123). The researchers found a strong correlation between cognitive style and usefulness
of information.
In an exploratory field study of five senior executives, Jones and McLeod (1986)
examined where and how senior executives get their decision-making information. The
study’s findings indicated that executives obtain a great deal of information from both the
environment and from informal information sources, and that formal computer-based
information systems do not seem to provide much information directly to the executive.
These researchers have suggested that “executive information systems be conceptualized for
design in the broadest terms possible to include internal and external information sources,
personal and impersonal sources, and a broad spectrum of media (meetings, computer and
non-computer reports, telephone, etc.) that vary in information richness” (p. 244). This
study showed how the external sources of information are important and related to the
quality of information used by an organization’s members.
Measures of Information Quality
As with the preceding dimension, researchers have used many surrogate measures for
information quality. For example, Bailey and Pearson (1983) developed a user satisfaction
instrument, which included nine items that measure information quality: accuracy,
timeliness, precision, reliability, currency, completeness, format of the output, volume of
output, and relevancy. This instrument has been validated by several researchers (Ives et al.,
1985/1986), and has become a standardized measure in the MIS field.
Gallagher’s (1974) multi-item measurement assessed information quality by utilizing
two measures of perceived value: an estimated dollar value in response to the following
question:
Assume that your company plans to eliminate all data processing and to obtain this report from another firm on an annual subscription basis. What is the maximum amount you would recommend paying for this report for you? (Gallagher, 1974, p. 48)
The second was a set of fifteen 7-point semantic differential bipolar adjective pairs
to which the respondent was asked to indicate his opinion of the report. The 7-point scale
ranged from -3 (extremely unfavorable) to +3 (extremely favorable). The score on this
measure of perceived value is the average of responses to all 15 adjective pairs.
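The scoring rule just described is a simple average over the fifteen bipolar pairs. A minimal sketch, using made-up ratings rather than data from Gallagher’s study:

```python
def semantic_differential_score(responses):
    """Average a respondent's ratings over the 15 bipolar adjective
    pairs, each scored from -3 (extremely unfavorable) to +3
    (extremely favorable)."""
    if len(responses) != 15:
        raise ValueError("expected ratings for all 15 adjective pairs")
    if any(not -3 <= r <= 3 for r in responses):
        raise ValueError("each rating must lie between -3 and +3")
    return sum(responses) / len(responses)

# A mostly favorable set of hypothetical ratings:
ratings = [2, 3, 1, 2, 2, 3, 1, 0, 2, 1, 3, 2, 1, 2, 2]
print(semantic_differential_score(ratings))  # 1.8
```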
Doll and Torkzadeh’s (1988) EUCS instrument included eight items that assessed
information quality. The eight items measured information quality through its content,
format, and timeliness. Each item was scored on a 5-point Likert-type scale.
System Use
The use of an information system, or information system report or output, is one of
the most frequently reported measures of the success of an information system (DeLone and
McLean, 1992). A number of conceptual studies proposed information use as a measure of information system success. For example, Ein-Dor and Segev (1978) attempted to
identify the organizational context variables affecting the success and failure of MIS.
Organizational context variables were categorized as uncontrollable (e.g., size, structure,
time frame, extra-organizational situation), partially controllable (e.g., resources, maturity,
psychological climate), and fully controllable (e.g., responsible executive, steering committee). System usage was chosen in the study as the measure of information system success. The writers asserted that usage was chosen as a measure of information system success because it is correlated with at least some of the other criteria used in the literature to measure success (e.g., profitability, application to major problems of the organization, quality of decisions and performance, and user satisfaction). In Ein-Dor and
Segev’s words, “these criteria are clearly mutually dependent...we claim that a manager will
use a system intensively only if it meets at least some of the other criteria, and that use is
highly correlated with them” (p. 1065).
Similarly, Hamilton and Chervany (1981) provided a conceptual hierarchy of system
objectives that needed to be considered in evaluating information systems. In this
conceptual hierarchy, the writers combined two perspectives: the efficiency perspective
(how efficiently MIS development and operations utilize assigned resources to provide the
information system to the user) and the effectiveness perspective (the effectiveness of the
user or the organizational units in using the information system in accomplishing their
organizational mission).
Both the efficiency and effectiveness perspectives have certain objectives. The
efficiency perspective’s objectives are the requirements definition for the information, the
resources consumed to provide the information system, the production capability or capacity
of the resources, and the level of investment in resources. The effectiveness perspective’s
objectives are the information provided by the information system and the support provided
by the MIS function to users of the system, the use of the information system and its effect
on user organizational processes and performance, and the effect of the information system
on organizational performance. Hamilton and Chervany argued that within each type of
objective there is interdependence among the objectives. In other words, each objective
affects the objective that follows. The linkage between the objectives of the two
perspectives, according to the writers, takes place when the organizational performance
objective (effectiveness perspective) affects the environment, which, in turn, affects the
resource investment objective (efficiency perspective). The writers argued that system
usage could be a measure of information system effectiveness because effects on
organizational objectives and performance “do not follow directly and immediately, but
rather result from use of the information system” (Hamilton & Chervany, 1981, p. 58).
Hamilton and Chervany (1981) made another interesting recommendation to extend
the evaluation of the information system process to include not only the primary user of the
information system but also other people involved in the achievement of information system
objectives, both from the efficiency and effectiveness perspectives.
A number of empirical studies have been conducted to investigate the relationship
between information system usage and other organizational variables. For example, King
and Rodriguez (1978) investigated the relationship between user involvement and system
usage. Their experimental study was conducted with managers enrolled in a part-time MBA
program who had completed virtually all of the program requirements. The researchers did
not find a relationship between user involvement and system usage.
In the same vein, Kim and Lee (1986) investigated the relationship between user
participation and degree of MIS usage. They proposed a four-dimensional model for this
relationship: participation characteristics, system characteristics, system initiator, and the
system development environment (includes top management support and overall user
attitudes). There were no external variables included in this model. The researchers found a
relationship between user participation and system usage. Lucas (1975b) investigated the
relationship between decision style, situational and personal factors, attitudes toward
computers, and system usage. The situational variables included in this study were high
transitional customer base, hub office competition, heavy competition, and light
competition. Lucas argued that information system usage is positively related to decision
style, situational and personal factors and attitudes toward computers. Moreover, Lucas
found that positive attitudes toward computers, perceived high-level management support,
and computer potential could be used to predict high levels of information system usage.
Regarding situational factors, Lucas (1975b) concluded
Clearly situational...factors need to be considered in designing accounting and other information systems; the nature of the relationship among these variables will probably be unique and dependent on each organization and its environment (p. 745).
Ein-Dor, Segev, and Steinfeld (1981) tested three proposals related to system usage
and three measures of information system profitability. The three measures of profitability
are actual costs relative to budgeted costs, subjective evaluation of relative resource
requirements, and subjective evaluation of cost savings. The three proposals supported in
this study are
1. the use of an IS increases when it is perceived as profitable and decreases when it is perceived as not profitable;
2. the greater the contribution to improved decisions or performance, the greater the use of the IS, and the lower the contribution, the lower the level of use; and
3. the more satisfied users are with an IS, the greater the use, and the less their satisfaction, the lower the level of use.
Karahanna and Straub (1999) studied 100 e-mail system users and found that
system use is affected by the medium’s usefulness, which is affected by perceptions of the
ease of use. LISREL 7 was used to analyze the relationships between the variables. The
goodness of fit index for the model of these relationships was .96. In this study, usefulness
is defined as the belief that an information system is useful in the job, while the ease of use
is defined as the extent to which an information system is user-friendly.
Baroudi et al. (1986) gave empirical evidence that system usage and user satisfaction
are linked. The researchers noted “user information satisfaction is an attitude toward the
information system, while system usage is a behavior” (p. 234). The study provided
evidence that user satisfaction is related to greater system usage (r = .28), although the study
did not identify the direction of this relationship:
Satisfaction -> Usage versus Usage -> Satisfaction
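The direction problem noted here is inherent to correlation: Pearson’s r is symmetric in its two arguments, so it cannot distinguish the two causal orderings. A minimal sketch of the computation, using hypothetical ratings rather than Baroudi et al.’s data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical satisfaction and usage ratings for five users.
# Note pearson_r(satisfaction, usage) == pearson_r(usage, satisfaction).
satisfaction = [3, 4, 2, 5, 4]
usage = [2, 4, 3, 5, 3]
print(round(pearson_r(satisfaction, usage), 2))  # 0.73
```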
Measures of System Use
Researchers have used a variety of instruments to measure information use. These
instruments range from actual behavior (e.g., Schewe, 1976) and documented usage (e.g., Ein-Dor, Segev, & Steinfeld, 1981) to self-reported perceptions of past usage (e.g., Lucas, 1975c).
Kim and Lee’s (1986) study developed a measure of usage that took into account its voluntary aspect. Their measurement captured both the frequency of use and the voluntariness of use, each measured on a single-item, 7-point Likert-type scale. The frequency scale ranged from 1 (much less frequent use) to 7 (very frequent use); the scale
associated with voluntariness was anchored by 1 (completely mandatory use) and 7
(completely voluntary use). To compute the system usage index, the responses to the two
items are multiplied (thus, the range is from 1 to 49) and the square root of the product is
taken for the purpose of normalizing the scale.
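The scoring rule above can be sketched directly. The function below is an illustration of the multiply-and-square-root index, not Kim and Lee’s original implementation:

```python
import math

def usage_index(frequency: int, voluntariness: int) -> float:
    """Compute Kim and Lee's (1986) system usage index.

    Both inputs are 7-point Likert-type ratings (1-7). The two
    responses are multiplied (product ranges from 1 to 49) and the
    square root is taken to bring the index back to a 1-7 scale.
    """
    for score in (frequency, voluntariness):
        if not 1 <= score <= 7:
            raise ValueError("Likert ratings must be between 1 and 7")
    return math.sqrt(frequency * voluntariness)

# A frequent (6) but largely mandatory (2) use pattern:
print(round(usage_index(6, 2), 2))  # 3.46
```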
Building on Igbaria (1992) and Igbaria, Pavri, and Huff (1989), Anakwe, Anandarajan, and Igbaria (1998) measured usage through four indicators: actual daily use of the computer, frequency of use, number of packages used by participants, and number of tasks the system is used for. Their study was conducted on nine organizations in
Nigeria.
Doll and Torkzadeh (1998) developed a multidimensional measure of how
extensively information technology is utilized in an organizational context for decision
support, work integration, and customer service functions. The instrument consists of 74 items, 62 of which measured system use, while 12 items measured the impact of IT on
work. Using a pilot sample of 89 usable interviews, the two researchers validated the
instrument.
User Satisfaction
User satisfaction is the measure of the successful interaction between the information
system itself and its users (Glorfeld, 1994). DeLone and McLean (1992) argued that user
satisfaction has been widely used for the following reasons:
First, ‘satisfaction’ has a high degree of face validity. It is hard to deny the success of a system which its users say they like. Second, the development of the Bailey and Pearson instrument and its derivatives has provided a reliable tool for measuring satisfaction and for making comparisons among studies. The third reason for the appeal of satisfaction as a success measure is that most of the other measures are so poor; they are either conceptually weak or empirically difficult to obtain (p. 69).
Many researchers have studied user satisfaction and how it is related to other
variables. For example, Mahmood and Becker (1985/1986) tested the relationship between
end users’ satisfaction and the organizational maturity of the information system. User satisfaction was measured using Pearson’s instrument. The organizational maturity of the information system was measured using Nolan’s stage model, which consists of six stages (initiation, contagion, control, integration, administration, and maturity), each distinguished by several variables. For example, among the variables that distinguish the maturity stage are, in the area of data processing expenditure, tracking the rate of sales growth and, in the area of the applications portfolio, application integration mirroring information flows. The researchers found a weak direct correlation between variables in the maturity stage and the level of user satisfaction.
Ginzberg (1981) investigated the relationship between users’ expectations and their satisfaction. User satisfaction was measured with a single item assessing overall satisfaction with the information system. The study’s findings indicated that users who held realistic expectations prior to implementation were more satisfied with the system and used the system more than users whose pre-implementation expectations were unrealistic.
Lu and Wang (1997) tested the relationship between user satisfaction, management
styles, and user participation. The study was conducted on IS managers who work in
companies in Taiwan. The researchers found that user participation is not always significantly correlated with user satisfaction. Regarding management styles, the researchers found that management style should be adapted to the IS stage. At the initiation stage, a people-oriented management style has a connection with user involvement, but not with user satisfaction. At the development stage, both people-oriented and task-oriented styles are related to user participation and user satisfaction. At the maturity stage, management styles have no connection to user involvement but a significant correlation with user satisfaction.
Woodroof and Kasper (1998) integrated three organizational behavior theories of
motivation (equity, expectancy, and needs) with user satisfaction. Their argument is based
on the notion that the satisfaction construct is different from the dissatisfaction construct and
that the process of an information system is not like the outcome of an information system.
Accordingly, the writers proposed including four variables in the DeLone and McLean
model: process user dissatisfaction, outcome user dissatisfaction, process user satisfaction,
and outcome user satisfaction. These four variables, according to the writers, separately and jointly affect usage and satisfaction in the DeLone and McLean model.
Baroudi and colleagues (1986) tested the relationship between user satisfaction and system usage. User satisfaction was measured using the Bailey and Pearson instrument. The researchers found a positive relationship between the two variables (r = .28); however, the causal ordering of this relationship could not be identified.
Khalil and Elkordy (1999) investigated the relationship between user satisfaction and
systems usage using a sample of Egyptian banks. To measure user satisfaction, the
researchers used the short version of the User Satisfaction instrument originally developed
by Bailey and Pearson (1983). The researchers tested the reliability of this instrument; the overall reliability coefficient was 0.82, indicating that the total score of the instrument is reliable as a measure of the level of user satisfaction. Moreover, the reliability
coefficients for each of the basic elements in the instrument were calculated using factor
analysis. The reliability coefficients were: relationship with IS staff and systems (0.81),
quality of systems output (0.64), and user’s understanding of systems and user’s
involvement in systems development (0.67). Regarding the relationship between user
satisfaction and usage, the researchers found a positive correlation between the two concepts
(r = .36).
While some studies did identify a positive relationship between usage and user
satisfaction, several studies did not find such a relationship (e.g., Schewe, 1976; Cheney &
Dickson, 1982; Srinivasan, 1985). Kim, Suh, and Lee (1998) argued that contingency
variables (task variability and task analyzability) have an effect on usage and a moderating
effect on the relationship between usage and user satisfaction. An empirical study
conducted on several companies in Korea was used to give evidence for the effect of the
contingency variables. In this study, user satisfaction was measured through six items
adapted from Maish (1979), Ginzberg (1981), Sanders (1984), and Lee and Kim (1992).
The Cronbach’s alpha for the six items was 0.874.
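Reliability coefficients such as this alpha of 0.874 can be computed directly from the raw item scores. A minimal sketch using NumPy, with fabricated ratings rather than data from Kim, Suh, and Lee’s study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point ratings from four respondents on three items:
scores = np.array([
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
])
print(round(cronbach_alpha(scores), 3))  # 0.975
```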
In an exploratory study, Ang and Soh (1997) examined the relationships between
user satisfaction, job satisfaction, and users’ computer background. The researchers found
that user information satisfaction (UIS) provides a sound indication of job satisfaction;
however, there was no relationship between UIS and computer background.
Palvia and Palvia (1999) investigated the variables that influence user satisfaction in
small businesses. The variables the researchers tested were gender, age, race, education, and
computing skills. Among these variables, gender and age were the only variables that had
significant association with user satisfaction.
Measures of User Satisfaction
User satisfaction is one of the most widely used measures of information system success. Recently, however, some scholars have argued that user satisfaction is not sufficient to measure IS success. For example, DeLone (1990, p. 88) stated
User satisfaction alone is not sufficient to adequately capture the full meaning of effectiveness. For one thing, it fails to consider the role user behavior plays in the transformation of inputs to outputs. While IS managers may be interested in effect, senior management and stockholders are likely to be more interested in the performance of the human-computer system as it relates to IS investment and operating expenditures.
The popularity of user satisfaction as a measure of information system success has led
researchers to operationalize this dimension in many ways. For example, Ginzberg (1981)
used a single item to assess overall user satisfaction, asking: “All in all, how satisfied are
you with the system?”
In a different approach, other researchers have developed multi-item instruments to
assess user satisfaction. For example, Bailey and Pearson’s (1983) instrument focused on
general user satisfaction. The instrument included 14 items that focused on users’
perceptions of the success of the IS. This instrument has also been reduced to eight items
and revalidated by several other researchers (Ives et al., 1983; Baroudi & Orlikowski, 1988;
Iivari & Ervasti, 1994; Mahmood & Becker, 1985/1986). Iivari and Ervasti conducted a
study on one municipal organization with 8000 employees (Oulu City Council). They found
that the user information satisfaction instrument was valid and reliable.
In the same vein, Doll and Torkzadeh (1988) merged ease of use and information
product items to measure the satisfaction of users who directly interact with the computer
using specific applications. Torkzadeh and Doll (1991) and Hendrickson, Glorfeld, and
Cronan (1994) have validated this instrument.
Individual Impact
Individual impact refers to the effect of information on the behavior of the recipient
of the information (DeLone & McLean, 1991). DeLone and McLean indicated that
the performance of information system users and individual impact are closely related.
Improving performance indicates that the information system has a positive impact.
Millman and Hartwick (1987) found that office automation has led to positive effects
on the workplace. The 75 managers utilized for the sample reported that automation led to
improving their effectiveness, as well as the effectiveness of their organization. Similarly,
Bikson, Stasz, and Mankin (1985) studied the impact of automation on individuals’ work.
These researchers found that the majority of people employed in automated offices felt that
information systems enriched their work.
Marcolin, Munro, and Campbell (1997) investigated the relationships among job
characteristics (feedback, autonomy, task identity, and skill variety), individual traits
(computer anxiety and locus of control), individual beliefs surrounding technology usage
(perceived relative advantage and perceived ease of use), and user ability to employ
information systems. The findings indicated that skill variety, computer anxiety, and
relative advantage of information systems were important in identifying users with higher
and lower abilities. The regression coefficients of these variables ranged from .10 to -.47.
Igbaria and Tan (1997) investigated the implications and consequences of IT
acceptance by examining the relationship between IT acceptance and its impact on
individual users. The research model involved three components: user satisfaction, system
usage, and individual impact. The findings indicated that user satisfaction is an important
factor affecting system usage, and that user satisfaction has the strongest direct effect on
individual impact.
Measures of Individual Impact
Individual impact has been measured in various ways, including decision quality, the
estimated value of the information system (Cerullo, 1980), and the estimated dollar value of the
information received (Gallagher, 1974; Keen, 1981).
Millman and Hartwick (1987) used a questionnaire to assess the impact of office
automation on middle management. Managers were asked whether office automation has
increased, decreased, or had no effect on 15 different aspects of these managers’ job and
work (e.g., importance of job, amount of work required on the job, accuracy demand on the
job, skill needed on the job, interesting job).
Doll and Torkzadeh (1998) used 12 items as part of their multidimensional measure
to test the impact of IT on task productivity, task innovation, customer satisfaction, and
management control. Torkzadeh and Doll (1999) further validated the same 12 items for the
purpose of developing an instrument for measuring the impact of information technology on
work. The reliability scores were 0.93, 0.95, 0.96, and 0.93 for task productivity, task
innovation, customer satisfaction, and management control, respectively. The overall
reliability for the instrument was 0.92.
Organizational Impact
Organizational impact refers to the influence of information systems on the overall
organizational performance. Unlike other dimensions, a number of authors who have
studied this dimension extended the measurement of the effects of information systems to
include not only effects on internal organizational variables, such as the effectiveness of
decision making (Lucas, 1981), but also effects on variables outside the organizational
boundaries, such as relationships with suppliers (Mahmood & Soon, 1991).
Cron and Sobol (1983) and Bender (1986) are examples of researchers who have
focused on the internal effects of information systems. These researchers examined the
overall organizational expenses and how they related to information systems. The
researchers found that companies that lease information systems tended to have higher
organizational expenses.
Mahmood and Soon (1991) attempted to develop a comprehensive model to measure
the effects of information systems on organizations by integrating several internal
organizational variables and external variables. The variables included in the model were:
new entrants, entry barriers, buyers and consumers, competitive rivalry, suppliers, search
cost and switching costs, products and services, economics of production, internal
organizational efficiency, inter-organizational efficiency, and pricing.
Measures of Organizational Impact
The type of variables that each researcher focused on influenced how researchers
measured the impact of information systems on an organization. Researchers who focused
on internal variables used financial measures such as return on investment (Vasarhelyi,
1981) and cost/benefit analysis (Johnston & Vitale, 1988). Other researchers included non-
financial measures. For example, Jenster (1987) examined productivity, innovations, and
product quality.
Mahmood and Soon (1991) developed a measure to assess the impact of information
systems on several of the strategic variables mentioned in the previous section. The
instrument included 50 items beginning with the phrase, “To what extent do you think
information technology...”, measured on a 5-point Likert-type scale from 1 (no extent) to 5
(very great extent). Sabherwal (1999) developed and tested a measure of the impact of
information systems on overall organizational performance. His measure consisted of five
items measuring the impact of information systems in areas such as cost reduction,
improvement of the organization’s image, and customer satisfaction. The Cronbach’s alpha
for this measure was 0.84. Inter-rater reliability was also tested and supported.
Using Van de Ven and Ferry’s (1980) criteria of perceived unit performance, Iivari
and Ervasti (1994) developed and tested a measure for the impact of information technology
on organizations. The measure assesses the impact on quantity of output, quality of output,
innovations, reputation for excellence, and morale.
Integrated Models of Information System Success
In contrast to researchers in the second section, researchers in this section attempted
to develop comprehensive models for information system success based on the studies cited
in the second section. Using these models, researchers have attempted to clearly identify
information system success dimensions, the relationships between these dimensions, and the
relationships between these dimensions and other organizational variables.
DeLone and McLean (1992) were among the first researchers to develop a
comprehensive model for IS success. They did this by first conducting a comprehensive
review of different information system success measures; then they developed a scheme for
classifying all IS success measures. This scheme included six categories (or dimensions):
1. System Quality
2. Information Quality
3. Information Use
4. User Satisfaction
5. Individual Impact
6. Organizational Impact.
Next, the researchers developed a model for IS success. DeLone and McLean
argued that this model “recognizes success as a process construct which must include both
temporal and causal influence in determining I/S success” (p. 83).
As illustrated in Figure 1, DeLone and McLean arranged the six information system
success categories (dimensions) listed above to suggest two things: (1) the interdependence
between these dimensions; and (2) the time sequence or causal ordering of these dimensions.
The DeLone/McLean model proposes that SYSTEM QUALITY and INFORMATION
QUALITY singularly and jointly affect both SYSTEM USE and USER SATISFACTION.
Additionally, the amount of SYSTEM USE can affect the degree of USER
SATISFACTION - positively or negatively - and the degree of USER SATISFACTION
also affects SYSTEM USE. SYSTEM USE and USER SATISFACTION are direct
antecedents of INDIVIDUAL IMPACT. Lastly, this IMPACT on individual performance
should eventually have some ORGANIZATIONAL IMPACT (DeLone & McLean, 1992, p.
82, 87).
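The causal ordering just described can be sketched as a small directed graph. This encoding is purely illustrative; the dictionary and helper below are not part of DeLone and McLean's paper:

```python
# Directed edges of the DeLone/McLean IS success model, as described above:
# each dimension maps to the dimensions it directly affects.
DM_MODEL = {
    "System Quality": ["System Use", "User Satisfaction"],
    "Information Quality": ["System Use", "User Satisfaction"],
    "System Use": ["User Satisfaction", "Individual Impact"],
    "User Satisfaction": ["System Use", "Individual Impact"],
    "Individual Impact": ["Organizational Impact"],
    "Organizational Impact": [],
}

def antecedents(target):
    """Return the dimensions that directly affect `target` in the model."""
    return {src for src, dsts in DM_MODEL.items() if target in dsts}
```

For example, `antecedents("Individual Impact")` yields System Use and User Satisfaction, matching the model's claim that those two dimensions are the direct antecedents of individual impact.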
In order to use this model effectively, the researchers suggested two things. The first is
to systematically combine individual measures from the information systems success
categories (dimensions) for the purpose of creating a comprehensive measurement
instrument. Second, contingency variables (such as structure, size, and environment of
organization) should be taken into account when selecting an information system measure of
success.
DeLone and McLean called for further development and validation of their model.
This call motivated many researchers to test, expand, and modify DeLone and McLean’s
model. In fact, most of the studies that attempted to develop a comprehensive model or
partial model for information system success were based on DeLone and McLean’s model.
Myers, Kappelman, and Prybutok (1997) note that DeLone and McLean’s model is
the most comprehensive IS assessment model offered by existing IS research. However, as
noted earlier in the chapter, the relationship between an IS and its external environment is
not conceptually included in the model.
Seddon and Kiew (1994) tested part of DeLone and McLean’s model. The
researchers proposed the causal paths among the six variables in the model as illustrated in
Figure 2. The researchers tested the relationships among the four variables in the box after
replacing use with usefulness and adding a new variable called “user involvement.” They
found support for the relationships between the specified variables. The correlation analysis
in Seddon and Kiew’s study indicated that the four variables are directly associated, with
Pearson correlation coefficients ranging from .5468 to .7302.
Figure 2. Seddon and Kiew model of information system success. (The figure shows boxes for System Quality, Information Quality, Use, User Satisfaction, Individual Impact, and Organizational Impact.) Source: Seddon and Kiew, 1994.
Glorfeld (1994) represented the relationships among the variables in DeLone and
McLean’s model as:
IT Effectiveness = f(SQ, IQ, SU, II, OI)
SU = f(SQ, IQ, US)
US = f(SQ, IQ, SU)
II = f(SU, US)
OI = f(II)
After combining User Satisfaction, System Quality, and Information Quality into one
variable called “satisfaction,” Glorfeld tested the model (see Figure 3). His findings
supported the relationships among the variables except the relationship between individual
impact and organizational impact. There was a significant negative relationship between
these two variables. Glorfeld argued that this could be due to the small sample size or to the
composition of the sample.
Garrity and Sanders (1998) extended the user satisfaction variable in DeLone and
McLean’s model, proposing that task support satisfaction, quality of work life satisfaction,
interface satisfaction, and decision-making satisfaction are the constructs that underlie any
measurement of user satisfaction.
Seddon (1997) modified and extended DeLone and McLean’s model by discussing
more deeply the meaning of information system use and adding four new variables
(Expectations, Consequences, Perceived Usefulness, and Net Benefits to Society) to the
model.
Figure 3. Glorfeld Four Variables Model of Information System Success. (The figure shows Satisfaction, System Usage, Individual Impact, Organizational Impact, and IT Effectiveness.) Source: Glorfeld, 1994.
Figure 4. Seddon model of information system success. (The figure pairs a partial behavioral model of IS use, covering expectations about the net benefits of future IS use, IS use itself, its individual, organizational, and societal consequences, and feedback, with the IS success model: measures of system and information quality, perceived usefulness, user satisfaction, and other measures of the net benefits of IS use to individuals, organizations, and society. In the figure's key, solid-line arrows denote causality and the dotted-line arrow denotes influence rather than causality.) Source: Seddon, 1997.
As illustrated in Figure 4, Seddon’s model addressed the external effect of
information systems represented by societal consequence. This effect is not a measure of IS
success, but rather is a description of an outcome attributed to IS use. Moreover, the
relationship between the consequences and the IS success model is an influence, not a cause.
This means that in this model, the effect of the external environment on information systems
is very weak.
In one of his recommendations, Seddon drew attention to the multiple people
who evaluate information systems and how measures should reflect this characteristic.
Researchers need to think carefully about who is to be asked to do the evaluation, and what those peoples’ interests are in the outcomes of the evaluation process. Subjects and measures should then be chosen accordingly (Seddon, 1997, p. 252).
Ishman (1998) developed an instrument for measuring three variables of DeLone and
McLean’s model: system quality, information quality, and user satisfaction. The main goal
of this study was to develop an instrument that could be applied in cross-cultural
environments. The study’s population included individuals from five countries: Mexico,
The People’s Republic of China, the United States, Latvia, and the English and French
speaking geographic regions of Canada. Ishman’s instrument was based on the works of
Baroudi and Orlikowski (1988), Joshi (1990), and Kappelman (1990). Of this instrument,
eight items measured information quality and system quality while one item measured user
satisfaction. The instrument was tested and validated using Straub’s (1989) and Churchill’s
(1979, 1996) recommended approaches to instrument validation in the MIS field.
Ballantine, Bonner, Levy, Martin, Munro, and Powell (1997) argued that DeLone
and McLean’s model was not complete and they attempted to build a model that could
overcome the perceived weaknesses in DeLone and McLean’s model. They called their
model “The 3-D Model Of Information Systems Success.” In this model, they took into
account external factors, based on their belief that, “Information gained from systems is
more likely to be used in the wider context of supporting value chain activities and more
open management than for purely internal consumption” (Ballantine et al., p. 10).
The 3-D model includes three levels and three filters between the three levels. First,
there is the development level, which includes variables such as user involvement and
system type. Next is the deployment level, which includes variables such as user
satisfaction, user skills, and task impact. Last is the delivery level, which includes variables
such as use of output, benefits management, and support of a champion. Between these levels
are three filters that affect the three levels. The implementation filter is between the
development and deployment levels. The integration filter is between the deployment and
delivery levels. Finally, there is the environment filter, which comes after the delivery level.
The researchers argued that information system success is influenced by factors that
exist in the environment, such as competitor movement and political, social, and economic
factors. These factors are not in the control of the organization. The researchers explained
that the environmental filter was added to the model because it has implications for
measurement of success. For example, the ability of an information system to reach its
organizational goals could be hindered by factors outside the organization.
No follow up conceptual or empirical studies have been conducted to extend or
validate the 3-D Model of information systems success. This may be due to the complexity
of this model, which makes empirical testing very difficult.
Literature Abstract and Assessment
The literature review chapter contained three sections. The following is an abstract
and assessment of the three sections.
The External Environment and Information Systems
The first section of the literature review dealt with the relationship between the
external environment and information systems within public organizations, including several
studies that addressed the implications of organizational dependency on the external
environment for the evaluation of information systems in public organizations (see Bozeman
& Straussman, 1990; Newcomer, 1991). Studies that investigated the relationships between
information systems in public organizations and the external environment have concluded
that there is very close interdependency between information systems in public
organizations and the external environment (see Stevens & McGowan, 1985; Bozeman &
Bretschneider, 1986).
Several researchers have empirically tested the interdependency between information
systems in public organizations and the external environment. The findings of these studies
indicate that information systems in public organizations are more dependent on the external
environment than those in private organizations (see Bretschneider & Wittmer, 1993;
Bretschneider, 1990). Some researchers argue that failure to recognize this interdependency
could lead to catastrophic results (Bretschneider & Wittmer, 1993).
Mansour and Watson’s (1980) empirical study tested the applicability of private
sector IS models to the public sector. These researchers concluded that several external
variables in private sector IS models (e.g., amount of competition, variety of products
offered by the organization, the frequency with which the organization offers new products,
etc.) are not applicable to public sector organizations because public organizations function
in a different environment than the one faced by private sector organizations.
Some researchers have investigated the implications of the dependency on the
external environment for the evaluation of information systems in public organizations.
These writers argued that evaluation of information systems in public organizations must be
extended to include those actors in the external environment who can influence these
systems (see Bozeman & Straussman, 1990; Newcomer, 1991).
Many measures have been offered to evaluate information systems in public organizations.
These measures include accuracy, applicability, timeliness, user satisfaction, and attitudes of
both managers and users (Stevens & McGowan, 1985); timely and accurate response to
external requests (Bozeman & Bretschneider, 1986); usefulness and reliability, ease of use,
time saving, user acceptance, meeting legislative requirements (Newcomer, 1991); and
public official satisfaction (Bozeman & Straussman, 1990).
Studies of Information System Success
DeLone and McLean’s taxonomy (1992) was used to organize the studies in this
section. Their taxonomy includes six dimensions of information systems success (system
quality, information quality, system use, user satisfaction, individual impact, and
organizational impact). Under each of these dimensions, many studies were discussed in
terms of what variables were included and how the dimension was measured.
In terms of the system quality variable, several studies found a relationship between
system quality and user involvement. Many researchers have developed instruments to
measure system quality, some of which are widely used because several studies have
demonstrated their reliability and validity (see Doll & Torkzadeh,
1988; Bailey & Pearson, 1983). Moreover, system acceptance was found to relate to ease of
use and usefulness. User satisfaction and system usage were not found to be indicators of
system quality.
Many variables were found to relate to the Information Quality variable (e.g., user
participation in the system design, cognitive style, and external sources). As with the
preceding variable, information quality was measured using both single-item and multi-item
scales.
Under the system use variable, individual perceptions, user involvement, situational
variables, ease of use of the system, degree of social influence exerted by supervisors,
perceptions of the social presence of the system, and user satisfaction were found to relate to
system use (e.g., King & Rodriguez, 1978; Kim & Lee, 1986; Lucas, 1975b; Baroudi et al.,
1986). System use was measured through actual daily use of the computer, frequency of
use, the number of software applications used by the participants, the number of tasks the
system is used for (Igbaria, 1992; Anakwe, Anandarajan, & Igbaria, 1998), how many times
the system was used and willingness of use (Kim & Lee, 1986), and how extensively
information technology is utilized in an organizational context for decision support, work
integration, and customer service functions (Doll & Torkzadeh, 1998).
Under the user satisfaction variable, organizational maturity, management style,
contingency variables, users’ expectations, and usage were found to relate to user
satisfaction (see Mahmood & Becker, 1985/1986; Ginzberg, 1981; Lu & Wang, 1997;
Khalil & Elkordy, 1999). The popularity of using this variable to measure information
system success motivated many researchers to develop measures of this variable. Among
the most widely used were the measures developed by Bailey and Pearson (1983) and Doll
and Torkzadeh (1988).
With regard to the individual impact and organizational impact variables, there were
mixed results concerning whether the information system had a positive or a negative effect
on individuals or on organizations. Measuring this effect proved to be very difficult; few
studies have attempted to do so (see Millman & Hartwick, 1987; Doll & Torkzadeh, 1998;
Figure 5. A Comprehensive Model for Evaluating IS in Public Organizations
Hall (1972) and Miles (1980) also proposed two types of external environment. Hall
mentioned two types of environmental conditions: general conditions (those conditions of
concern to all organizations, such as the economy and demographic changes) and specific
environmental conditions (specific environmental influences on the organization, such as
other organizations with which it interacts or particular individuals who are crucial to it).
Hall noted that interactions in the specific environment are direct, while the general
environment “is not a concrete entity in interaction, but rather comprises conditions that
must be grappled with” (p. 298).
Miles (1980) agreed with the concepts of general environment and specific
environment. Miles includes those conditions that are important for whole classes of
organizations (e.g., technological conditions, legal conditions, political conditions, etc.) in
the general environment, asserting that these conditions are potentially relevant for an
organization, but do not have day-to-day interaction with the organization. Miles explains
that the general environment has an impact on both the organization and its specific
environment. On the other hand, Miles notes that conditions in the specific environment
have immediate relevance to the organization and direct interaction with the
organization. This is equivalent to Thompson’s concept of task environment.
In the information system literature, Ives and Davis (1980) proposed a model for IS
research using two information system environments: the external environment and the
organizational environment. Ives and Davis defined external environment as including
legal, social, political, cultural, economic, educational, resource, and industry/trade
considerations. Variables in the external environment can affect information systems within
organizations through the resources and constraints that these variables can impose or offer.
For example, legislative budgetary requirements could impose constraints on the resources
available for IS development.
According to Ives and Davis, the organizational environment is marked by the
organizational goals, tasks, structure, volatility, and management philosophy/style. These
variables can affect IS development and management. For example, the centralization or
decentralization of the organizational structure can affect how information systems are developed
and managed.
In the public management information system literature, Bozeman and Bretschneider
(1986) proposed frames similar to those proposed by Ives and Davis (1980). Bozeman and
Bretschneider (1986) maintain that the frame for public management information system
research consists of three levels: society, organization, and individual. The society level
includes environmental variables that “define resources and constraints on MIS”; the
organizational level includes variables within the organizational context that affect
information systems, such as “size, structure, time frame, organizational resources, and
organizational maturity”; and the individual context “reflects characteristics of individual
actors within an organization, including cognitive style, level of satisfaction with MIS, and
other such personal and demographic” information (pp. 475-478).
Bozeman and Bretschneider (1986) further elaborated on their frame for public
management information systems by combining the previous variables into four models of
publicness and proposing two types of environment. The environmental variables were
included in two models (economic authority model and political authority model), which
include the unique economic and political characteristics of public organizations. The
organizational variables were included in a third model (work context model) and the
individual variables were included in a fourth model (personnel and personnel system
model).
Bozeman and Bretschneider (1986) contend that the four models are located in two
types of environment: the economic authority model and the political authority model are
located in the distal environment, and the work context model and the personnel and
personnel system model are located in the proximate environment. Bozeman and
Bretschneider (pp. 480-481) stated:
[T]he models are interrelated because they stand in hierarchical relation. The Political Authority and Economic Authority models comprise the distal environment and introduce constraints which are broad and sweeping (e.g., market failures, public interest) and these remote factors of the distal environment can be viewed as directly influencing the "proximate" environment (i.e., the Work Context Model), which in turn directly influences the attitudes and behaviors of individuals in organizations (e.g., the Personal Model).
Thus, we have explained that there are three types of environment in which an
information system operates. The first environment includes variables that exist within the
organization, the second environment includes external variables that have immediate
relevance and direct interactions with the organization, and the third environment includes
external variables that have potential relevance and do not have direct interaction with the
organization. Although different researchers have different names for these types of
environments, the different terms ultimately mean the same types of environment.1
1 It is interesting to note that some researchers have used the term ‘organizational environment’ to refer to conditions that exist within the organization. From a systems theory viewpoint, the term organizational environment denotes everything that exists outside the organizational boundaries. This situation has led to some confusion regarding the variables that exist within this type of environment. In order to prevent further confusion regarding the variables that exist within each frame, the inner frame in the model in Figure 5 will be called ‘organizational boundary.’
Thus, the three environments will be incorporated in DeLone and McLean’s model
as three frames within each other. The organizational boundary frame includes all internal
variables that exist within the organizational boundaries. The middle frame, task
environment, includes those external variables that have immediate relevance and direct
interactions with the organization. The outer frame, general environment, includes those
external variables that have potential relevance and do not have direct interaction with the
organization. These titles were chosen because of their familiarity in the literature.
As step four in the process of developing the Seven-Dimension model, a seventh
dimension was added to DeLone and McLean’s model. Based on the first section of the
literature review, this seventh dimension is called External Environment Satisfaction (EES).
EES denotes the satisfaction of external stakeholders that use an information system or its
outputs, and could directly or indirectly influence the information system. This influence
could be, for example, through many of the constraints that could be imposed on public
organizations from the external environment (i.e., legal and budgetary constraints).
Figure 5 represents the final product after finishing all the steps. The causal paths
among the seven dimensions in the model are represented mathematically as:
OI = f (SQ, IQ, SU, US, IM, EES, OB) (3.A.1)
IM = f (SU, US, EES, OB) (3.A.2)
SU = f (SQ, IQ, US, EES, OB) (3.A.3)
US = f (SQ, IQ, SU, EES, OB) (3.A.4)
SQ = f (OB, EES) (3.A.5)
IQ = f (OB, EES) (3.A.6)
where SQ, IQ, SU, US, IM, and OI represent System Quality, Information Quality, System
Use, User Satisfaction, Individual Impact, and Organizational Impact. EES represents the
External Environment Satisfaction. OB represents the effects of factors within the
Organizational Boundary that affect the previous six variables, such as size of the
organization and control of the information system (centralized vs. decentralized). The
model operationalization section presents the definitions and how these variables are
measured.
Equation 3.A.1 suggests that Organizational Impact is determined directly by Individual Impact and indirectly by the rest of the variables in the model through their effect on Individual Impact; moreover, factors within the Organizational Boundary and the External Environment Satisfaction also directly affect Organizational Impact.
The Pearson correlation coefficients in Table 5 clearly indicate that there are strong
direct associations among the variables in the study. The largest correlation coefficient is
between System Quality and Individual Impact (r = 0.83) and between System Quality and
Organizational Impact (r = 0.81). The smallest correlation coefficient is between User
Satisfaction and System Usage (r = 0.50). The rest of the correlation coefficients are between .50 and .80. All correlation coefficients are statistically significant at the .01 level.
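The reported coefficients and significance tests can be reproduced with a standard Pearson correlation routine. The sketch below uses synthetic data, not the study's data; the scale names and effect size are hypothetical placeholders:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 287  # sample size reported in the study

# Hypothetical standardized scale scores for two variables,
# e.g. System Quality (sq) and Individual Impact (ii)
sq = rng.normal(size=n)
ii = 0.8 * sq + 0.6 * rng.normal(size=n)

r, p = pearsonr(sq, ii)  # coefficient and two-tailed p-value
print(f"r = {r:.2f}, p = {p:.4f}")
```

With n = 287, even modest correlations clear the .01 significance level, which is consistent with every coefficient in Table 5 being significant at that threshold.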
The high Pearson coefficients in Table 5 are in line with other studies in the literature. For example, Seddon and Kiew (1994) tested part of DeLone and McLean’s model. The researchers proposed causal paths among the six variables of the model as illustrated in Figure 2, and tested the relationships between the four variables in the box after replacing Use with Usefulness and adding a new variable called “user involvement.” The correlation analysis in the Seddon and Kiew study indicated that the four variables are directly associated, with Pearson correlation coefficients ranging from .55 to .739. Seddon and Kiew (1994, p. 109) commented on these high correlations by saying
Such high correlation of multi-factor measures and overall satisfaction measures are not uncommon. Bailey and Pearson (1983, p. 536) report a correlation of .79 between their normalized importance-weighted measure of User Satisfaction (based on up to 39 questions) and their single-scale measure of overall . . . satisfaction.
Likewise, in several empirical studies and in the context of developing a new measure of usefulness and perceived ease of use, Davis (1989) found that usage is directly associated with usefulness and perceived ease of use, with Pearson correlation coefficients ranging from .45 to .85.
Nevertheless, the high Pearson correlation coefficients in Table 5 raise serious concern that there may be two problems among the variables. First, two or more variables may measure the same concept. In other words, there is concern regarding the unity and number of concepts and variables in this study. The second problem is multicollinearity, which exists when there is high correlation among the independent variables.
Consequently, these two problems have to be investigated before going further in the data analysis. The first problem will be investigated using factor analysis. The second problem will be investigated through the variance inflation factor (VIF), a standard measure used to test for multicollinearity.
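A minimal sketch of the VIF check, using the identity that the VIFs equal the diagonal of the inverse correlation matrix of the independent variables. The data below are synthetic, not the study's:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: the diagonal of the inverse
    correlation matrix of the predictor columns of X."""
    R = np.corrcoef(X, rowvar=False)
    return np.diag(np.linalg.inv(R))

rng = np.random.default_rng(1)
n = 287
x1 = rng.normal(size=n)
x2 = x1 + 0.3 * rng.normal(size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)             # independent of the others
X = np.column_stack([x1, x2, x3])

v = vif(X)
print(v)  # large values for x1 and x2, near 1 for x3
```

A common rule of thumb flags VIF values above 5 or 10 as evidence of problematic multicollinearity.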
Factor Analysis
This section presents the third step in the data analysis plan. This step checked the
unity and number of concepts and variables in the study. The results are reported here for
the factor analysis that investigated whether multiple variables measured the same concept.
In factor analysis, this is accomplished by examining the loading of each item on the factors
produced by the factor analysis. In the literature, there is no agreement on the loading cutoff for including an item under a specific factor. For example, Churchill (1987) argued for a cutoff of 0.35 or 0.30. On the other hand, Rencher (1998) argued that a cutoff of 0.30 is unacceptable. Hair, Anderson, Tatham, and Black (1992) argued that loadings greater than 0.50 are considered very significant. Because this is the first study conducted to evaluate information systems in public organizations and there are no established measures in this sector, this study uses 0.60 as the cutoff for item loadings and an eigenvalue cutoff of 1.0.
In factor analysis, when a group of items loads highly on one factor, these items are
considered the items that measure this factor. In some cases, the factors produced and the
items loading perfectly correspond to the variables used and the items used to measure these
variables. However, in other cases, this correspondence does not take place. To solve this
problem, the researcher might change the variables he is using and create new variables. The new variables will be the factors produced by the factor analysis together with the items that loaded highly on them. However, in creating the new variables, statistical reasons should not be the only rationale. Conceptual considerations should be taken into account (Lich, 1998). In
other words, the researcher has to go back to the literature and see whether the items that
loaded highly on one factor are used, in the literature, to measure similar concepts. If the
answer is yes, then grouping these items is conceptually and statistically correct. However,
if the answer is no then grouping of these items is statistically correct but theoretically
incorrect. In this study, the researcher paid attention to both statistical and theoretical
considerations.
Two factor analyses were conducted in this study. The first analysis included all
items that measure the independent variables (System Quality, Information Quality, System
Usage, and User Satisfaction) while the second analysis included items that measure the two
dependent variables (Individual Impact and Organizational Impact). An iterative approach
was used to conduct the factor analysis. Items that did not make the loading cutoff and/or items that loaded on more than one factor were dropped from the analysis. The remaining items were then resubmitted to another round of factor analysis. This process continued until the researcher reached a meaningful factor structure.
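A single pruning pass of this iterative procedure can be sketched as follows. The loadings and item names are hypothetical, and in the study the surviving items would be resubmitted to a fresh factor analysis each round:

```python
import numpy as np

def prune_items(loadings, items, cutoff=0.60):
    """One pruning pass: keep only items that load at or above the
    cutoff on exactly one factor; items that fail the cutoff on every
    factor, or that cross-load on more than one, are dropped."""
    keep = []
    for name, row in zip(items, loadings):
        above = np.sum(np.abs(row) >= cutoff)
        if above == 1:  # loads cleanly on exactly one factor
            keep.append(name)
    return keep

# Hypothetical loadings (items x factors), not the study's actual values
items = ["SQ1", "SQ2", "SU6", "SU12"]
L = np.array([
    [0.87, 0.10],  # SQ1: clean load on Factor 1 -> kept
    [0.45, 0.20],  # SQ2: fails the 0.60 cutoff -> dropped
    [0.09, 0.79],  # SU6: clean load on Factor 2 -> kept
    [0.65, 0.70],  # SU12: cross-loads on both factors -> dropped
])
print(prune_items(L, items))  # ['SQ1', 'SU6']
```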
Factor Analysis - Independent Variables (SQ, IQ, US, SU)
In total, 40 items are used to measure the four independent variables (SQ, IQ, US,
SU). These items were entered into the principal component factor analysis with varimax
rotation.
Table 6 shows the eigenvalues of the four factors that were extracted.
Table 6
Eigenvalue of Factors
Eigenvalue % Variance % Cumul. Variance
Factor 1 11.50 28.75% 28.75%
Factor 2 3.84 9.59% 38.34%
Factor 3 2.63 6.58% 44.92%
Factor 4 2.15 5.38% 50.30%
Eigenvalue refers to the amount of variance that a factor can account for. It is clear
from Table 6 that all four factors have eigenvalues greater than 1.0, which is the cutoff
adopted in this study. Factor 1 has the largest eigenvalue (11.50) and explains 28.75% of the variance. In total, the four factors explain 50.30% of the variance.
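The arithmetic behind Table 6 can be illustrated with synthetic data: the eigenvalues of a correlation matrix sum to the number of items, so each factor's percentage of variance is its eigenvalue divided by the item count (the values here are illustrative, not the study's):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 287, 10
# Hypothetical item responses driven by two latent factors plus noise
factors = rng.normal(size=(n, 2))
X = factors @ rng.normal(size=(2, k)) + 0.5 * rng.normal(size=(n, k))

R = np.corrcoef(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order
pct = 100 * eigvals / k          # each eigenvalue's share of total variance
retained = eigvals[eigvals > 1.0]  # Kaiser criterion used in the study
print(retained, pct[: len(retained)].sum())
```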
Table 7 shows the factor loading after using varimax rotation. The following is a
discussion of the loading findings.
The System Quality scale
This scale consists of seven items that were borrowed from Bailey and Pearson
(1983). The scale asks the user about the time lapse between the request for data and the
response to that request, the ease and difficulty of using the system, ease and difficulty of
the sentences and words used in the system, balance between cost and benefits, trust in the
system, flexibility of the system, and connectivity of the system. Six items (SQ1, SQ3, SQ4,
SQ5, SQ6, SQ7) have loaded highly on Factor 1, with loading ranging from .72 to .87. Item
SQ2 did not make the cutoff, so it was dropped from further analysis.
Information Quality Scale
This scale consists of nine items borrowed from Bailey and Pearson (1983). The scale asks the user about information correctness, information availability, output variability,
information consistency, age of information, information comprehensiveness, display of
output, amount of information, and degree of congruence between what the users need and
output. Six items (IQ1, IQ2, IQ4, IQ5, IQ7, IQ9) loaded highly on Factor 1, with loadings
ranging from .60 to .80. Items IQ3, IQ6, and IQ8 did not make the cutoff, so they were
dropped from further analysis.
The System Usage Scale
This scale consists of 20 items that were borrowed from Igbaria, Pavri, and Huff
(1989). The scale measures usage through actual daily use of the computer, frequency of
use, the number of packages used by the participants, and the number of tasks the system is
used for. The 20 items on this scale did not load on a single factor; rather, they loaded on all
four factors. Item SU2 loaded on Factor 1 with a loading of .60. Items SU6, SU7, SU8, SU9, and SU10 loaded highly on Factor 2, with loadings ranging from .78 to .85. Items SU12, SU16, and SU17 loaded highly on Factor 3, with loadings ranging from .60 to .68. Items
SU3, SU4, SU5 loaded on Factor 4 with loadings ranging from .68 to .75. Items SU1, SU2,
Table 7
Factors of Independent Variables: Rotated Factor Matrix
Items 1 2 3 4
SQ1 Time between request and the fulfillment of request 0.87 0.10 0.01 0.01
SQ3 Sentences and words used to interact with system 0.85 0.13 0.00 -0.02
SQ4 Balance between cost and benefit 0.82 0.04 -0.09 -0.08
SQ5 Trust in the system output 0.85 0.08 0.01 0.02
SQ6 System ability to change 0.72 0.05 0.06 0.02
SQ7 System ability to connect to other organizations 0.77 0.07 -0.1 -0.08
IQ1 Information correctness 0.67 -0.03 0.11 -0.12
IQ2 Time information available compared to time needed 0.70 -0.07 0.16 0.03
IQ4 Information consistency 0.72 0.00 0.13 -0.11
IQ5 Age of the information 0.80 -0.07 0.09 0.10
IQ7 Material design of the display of the output 0.61 -0.09 0.08 0.14
IQ9 Degree of congruence between what the user wants and the output 0.65 0.05 -0.09 -0.04
US1 How adequately does system meet information needs 0.80 0.01 0.09 0.00
US2 How efficient is the system? 0.84 0.03 0.07 -0.02
US3 How effective is the system? 0.87 0.03 0.08 -0.07
US4 General satisfaction with the information system 0.85 0.04 0.02 0.00
SU3 Extent of use in Historical References task 0.07 0.16 -0.02 0.68
SU4 Extent of use in Looking for trends task 0.01 0.35 0.00 0.75
SU5 Extent of use in finding problems and alternatives task -0.03 0.28 0.01 0.75
SU6 Extent of use in Planning 0.09 0.79 0.16 0.19
SU7 Extent of use in Budgeting -0.02 0.83 0.07 0.10
SU8 Extent of use in communication 0.04 0.81 0.20 0.03
SU9 Extent of use in controlling and guiding activities task 0.05 0.85 0.13 0.06
SU10 Extent of use in decision making task -0.01 0.83 0.10 0.13
SU12 Package used in the job (Word Processing) 0.03 0.07 0.60 -0.11
SU16 Package used in the job (Graphics) 0.13 0.09 0.68 0.05
SU17 Package used in the job (communication) -0.06 0.09 0.61 -0.07
SU11, SU13, SU14, SU15, SU18, SU19, and SU20 did not make the cutoff, so they were
dropped from further analysis.
The User Satisfaction Scale
This scale consists of four items that were borrowed from Seddon and Yip (1992).
The scale measures system adequacy, system efficiency, system effectiveness, and general
satisfaction with the system. All four items loaded highly on Factor 1 with loadings ranging
from .80 to .87.
Table 8 summarizes the items eliminated from further analysis because they did not
make the cutoff. Table 9 summarizes the findings of principal component factor analysis on
the independent variables after dropping the items that did not make the cutoff.
Table 8
Summary of Items Eliminated from Further Analysis
SQ2 The ease and difficulty of using the system
IQ3 Variability between output information and what should be gotten
IQ8 Amount of information
IQ6 Information comprehensiveness
SU1 Time spent (in hours) using the system during working hours
SU2 Average use of the information system
SU11 Package used in the job (Spreadsheet)
SU13 Package used in the job (Data management)
SU14 Package used in the job (Modeling System)
SU15 Package used in the job (Statistical system)
SU18 Package used in the job (Programming)
SU19 Package used in the job (4GL)
SU20 Package used in the job (other packages)
Table 9
Summary of Item Loadings
Items 1 2 3 4
SQ1 Time between request and fulfillment of request 0.87 0.10 0.01 0.01
SQ3 Sentences and words used to interact with system 0.85 0.13 0.00 -0.02
SQ4 Balance between cost and benefit 0.82 0.04 -0.09 -0.08
SQ5 Trust in the system output 0.85 0.08 0.01 0.02
SQ6 System ability to change 0.72 0.05 0.06 0.02
SQ7 System ability to connect to other organizations 0.77 0.07 -0.1 -0.08
IQ1 Information correctness 0.67 -0.03 0.11 -0.12
IQ2 Time information available compared to time needed 0.70 -0.07 0.16 0.03
IQ4 Information consistency 0.72 0.00 0.13 -0.11
IQ5 Age of the information 0.80 -0.07 0.09 0.10
IQ7 Material design of the display of the output 0.61 -0.09 0.08 0.14
IQ9 Degree of congruence between what user wants and output 0.65 0.05 -0.09 -0.04
US1 How well does the system meet the information needs 0.80 0.01 0.09 0.00
US2 How efficient is the system? 0.84 0.03 0.07 -0.02
US3 How effective is the system? 0.87 0.03 0.08 -0.07
US4 General satisfaction with the information system 0.85 0.04 0.02 0.00
SU6 Extent of use in Planning 0.09 0.79 0.16 0.19
SU7 Extent of use in Budgeting -0.02 0.83 0.07 0.10
SU8 Extent of use in communication 0.04 0.81 0.20 0.03
SU9 Extent of use in controlling and guiding activities task 0.05 0.85 0.13 0.06
SU10 Extent of use in decision making task -0.01 0.83 0.10 0.13
SU12 Package used in the job (Word Processing) 0.03 0.07 0.60 -0.11
SU16 Package used in the job (Graphic) 0.13 0.09 0.68 0.05
SU17 Package used in the job (Communication) -0.06 0.09 0.61 -0.07
SU3 Extent of use in historical references task 0.07 0.16 -0.02 0.68
SU4 Extent of use in looking for trends task 0.01 0.35 0.00 0.75
SU5 Extent of use in finding problems and alternatives task -0.03 0.28 0.01 0.75
These loadings posed two dilemmas for the researcher. Should he combine all items that loaded highly on Factor 1 into one variable? Should he deconstruct the usage variable into three constructs according to the item loadings? If yes, does this coincide with the literature, or did other writers deconstruct the usage variable into three constructs?
The loading of items that measure Information Quality, System Quality, and User
Satisfaction on one factor is not uncommon. Several writers have reached the same results
using factor analysis and examining two or more of these variables. For example, Ishman (1996) found that the item that measured User Satisfaction loaded with the composite measure that was used to measure Information and System Quality. Ishman said, “it might be concluded from this result that this single-item [User Satisfaction] is measuring the same dimension of information success as the eight items it loads with” (p. 25). McHaney et al.
(1999) tested the reliability of the end user computing satisfaction measure (EUCS). This
scale is a composite of several items that measure Information Quality and System Quality
(e.g., items: SQ1, SQ3, IQ9, SQ5, IQ1, IQ2, IQ5, IQ7, IQ9). Using factor analysis, the
writers found that all items loaded on one factor, with loading values ranging from .76 to
.94. Glorfeld (1994) combined System Quality, Information Quality, and Satisfaction into
one variable, which he called Satisfaction (Figure 3). This was done after conducting a
factor analysis where all items that measure the three variables loaded on one factor.
Accordingly, supported by the statistical evidence found in this study through the use
of principal component factor analysis with varimax rotation and the conceptual evidence
found through the work of other researchers on the same variables, the researcher decided to
combine all items that loaded highly on Factor 1 into one variable. The only exception was
Item SU2 because this item measures the average use of the information system; thus, the
conceptual base of this item does not coincide with the rest of the items. The new variable
was called Satisfaction (as it was called by Glorfeld) to contribute to building unified
concepts in the information system and public management information system fields.
Regarding System Usage, several researchers have dealt with System Usage as a
multi-dimensional concept (Igbaria, 1992; Igbaria et al., 1989; Kim & Lee, 1986). The
dimensions that these researchers identified included actual daily use, frequency of use,
number of packages used, level of sophistication of usage, and inclusion of computer
analysis in decision-making usage as measured by the number of tasks the system is used in.
None of these studies, however, used factor analysis to analyze the inter-correlations of the
items that were included under each dimension.
The statistical evidence in this study indicated that usage is not a unitary construct
and could be deconstructed into three constructs. These three constructs correspond with
several dimensions identified by other researchers. For example, SU6, SU7, SU8, SU9, and
SU10 correspond to the inclusion of computer analysis in the decision-making dimension;
SU12, SU16, and SU17 correspond to the number of packages used; and SU3, SU4, SU5
could be considered as a subset of the inclusion of computer analysis in decision-making
dimension.
Because this is the first study to evaluate information systems in the public sector
that attempts to develop a comprehensive model - and in order to avoid further
complicating the investigation and analysis of the study’s model - the researcher chose to
consolidate all usage items that loaded on Factors 2, 3, and 4 into one variable called System
Usage. Deconstructing the System Usage variable, and how this could affect the
relationships with other variables in the model, will be left to future research.
Factor Analysis - Dependent Variables (IM, OI)
This subsection presents the findings of the factor analysis conducted on the two dependent variables, Individual Impact and Organizational Impact. In total, 18 items were used to measure the two dependent variables. These 18 items were entered into the principal component factor analysis with varimax rotation. Table 10 shows the eigenvalues of the two factors that were extracted.
Table 10
Eigenvalue of Factors
Eigenvalue % Variance % Cumul. Variance
Factor 1 6.27 36.90% 36.90%
Factor 2 5.02 29.56% 66.46%
Table 10 indicates that both factors have eigenvalues greater than 1.0, which is the
cutoff point for this study. Factor 1 has the largest eigenvalue (6.27) and explains 36.90% of the variance. In total, the two factors explain 66.46% of the variance.
Table 11 shows the factor loading after using varimax rotation. The following is a
discussion of the loading findings.
The Individual Impact Scale
This scale consists of ten items that were borrowed from Doll and Torkzadeh (1998). The instrument measures the impact of the information system on four work aspects (task productivity, task innovation, customer satisfaction, and management control). All ten items loaded highly on Factor 1, with loadings ranging from 0.69 to 0.79. These loadings exactly coincide with the conceptual grouping provided in Chapter 3.
The Organizational Impact Scale
This scale consists of eight items. Five items were borrowed from Sabherwal (1999) and three items were borrowed from Mahmood and Soon (1991). Seven items of this scale loaded on one factor, with loadings ranging from 0.63 to 0.79. Item OI2, which measures the impact of the information system on reducing administrative costs, did not make the cutoff. Thus, it was eliminated from the analysis in the second round of factor analysis.
Table 11
Factors of Dependent Variables: Rotated Factor Matrix
Items 1 2
IM1 Accomplished more work using the information system 0.69 0.39
IM2 Information system leads to increased productivity 0.77 0.34
IM3 Information system saves time 0.70 0.36
IM4 Information system helps apply new methods to do the job 0.77 0.16
IM5 Information system helps meet customer needs 0.74 0.44
IM6 Information system led to increased customer satisfaction 0.72 0.50
IM7 Information system led to improved customer service 0.73 0.51
IM8 Information system helps management control the work process 0.75 0.40
IM9 Information system improves management control 0.79 0.28
IM10 Information system helps management control performance 0.75 0.39
OI1 Distinguishes the organization from other organizations 0.30 0.66
OI3 Improves the efficiency of internal operations 0.24 0.78
OI4 Enhances organizational reputation 0.36 0.64
OI5 Enhances communication with other organizations 0.32 0.79
OI6 Enhances and improves coordination with other organizations 0.45 0.70
OI7 Improves decision making 0.26 0.63
OI8 Overall, makes the organization successful 0.35 0.74
The Implications of Results of Factor Analysis on the Study’s Model
This section presents the implications of the results of the factor analysis for the study’s model in terms of modifying the relationships in the model, the study’s research question, and the study’s hypothesis. Figure 7 depicts the 4-factor model that was produced by the factor analysis.
Figure 7. Study model after factor analysis.
Modifying the 3.A Equations
Based on the outcome of the factor analysis, the conceptual groupings of all variables in the study have changed. Thus, it is expected that associations among the study’s variables have also changed. The equations in Chapter 3 that presented the relationships in DeLone and McLean’s model have been modified to reflect the new information resulting from the factor analysis (Figure 7). The modified equations are:
OI = f (SU, STIS, II) (4.A.1)
II = f (SU, STIS) (4.A.2)
SU = f (STIS) (4.A.3)
STIS = f (SU) (4.A.4)
where STIS, SU, II, and OI represent Satisfaction, System Use, Individual Impact, and Organizational Impact.
Equation 4.A.1 suggests that Organizational Impact is determined directly by Individual Impact and indirectly by the rest of the variables in the model through their effect on Individual Impact. Equation 4.A.2 suggests that Individual Impact is determined directly by Satisfaction and System Use. Equation 4.A.3 suggests that System Usage is determined directly by Satisfaction. Equation 4.A.4 suggests that Satisfaction is determined directly by System Usage.
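Under the usual assumption that each f(·) is estimated as a linear regression, equation 4.A.1 can be sketched with ordinary least squares on synthetic, standardized scores (the coefficients below are illustrative placeholders, not the study's estimates):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 287
# Hypothetical standardized scale scores (not the study's data)
stis = rng.normal(size=n)                        # Satisfaction
su = 0.5 * stis + rng.normal(size=n)             # System Use (cf. eq. 4.A.3)
ii = 0.4 * su + 0.6 * stis + rng.normal(size=n)  # Individual Impact (4.A.2)
oi = 0.3 * su + 0.3 * stis + 0.5 * ii + rng.normal(size=n)  # eq. 4.A.1

# Estimate OI = f(SU, STIS, II) as a linear regression
X = np.column_stack([np.ones(n), su, stis, ii])
beta, *_ = np.linalg.lstsq(X, oi, rcond=None)
print(beta)  # intercept followed by coefficients for SU, STIS, II
```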
Modified Research Question and Hypothesis
The research question and hypothesis are expected to change to reflect the change in the relationships in the model and the subsequent changes in the equations that represent these relationships. Consequently, the study’s research question has been modified to: To what extent is the modified (4-factor) model useful in evaluating information systems in the public sector?
The following null hypothesis will be tested: The relationships that are indicated in the 4.A equations do not exist.
Scales Reliabilities
As a result of the factor analysis, most of the measures used in this study have been
modified. Consequently, the reliabilities of these measures have to be determined again.
The reliability analyses for these measures are contained in Tables 12 to 15.
Table 12
Scale Reliability of the Satisfaction Variable
Items
1. Time between request and the fulfillment of request
2. Sentences and words used to interact with the system
3. Balance between cost and benefit
4. Trust in the system output
5. System ability to change
6. System ability to connect to other organizations
7. Information correctness
8. Time information available compared to time it is needed
9. Information consistency
10. Age of the information
11. Material design of the display of the output
12. Degree of congruence between what the user wants and the output
13. How adequately does the system meet the information needs
14. How efficient is the system?
15. How effective is the system?
16. General satisfaction with the information system
Table 13
Scale Reliability of the System Usage Variable
Items
1. Extent of use in Historical References task
2. Extent of use in Looking for trends task
3. Extent of use in finding problems and alternatives task
4. Extent of use in Planning
5. Extent of use in Budgeting
6. Extent of use in communication
7. Extent of use in controlling and guiding activities task
8. Extent of use in decision making task
9. Package used in the job (Word Processing)
10. Package used in the job (Graphics)
11. Package used in the job (communication)
Table 14
Scale Reliability of the Individual Impact Variable
Items
1. Accomplish more work using the information system
2. Information system leads to increased productivity
3. Information system saves time
4. Information system helps apply new methods to do the job
5. Information system helps meet customer needs
6. Information system led to increased customer satisfaction
7. Information system led to improved customer service
8. Information system helps management control the work process
9. Information system improves management control
10. Information system helps management control performance

Cronbach's Alpha for Individual Impact = .95 (N = 287)
Table 15
Scale Reliability of the Organizational Impact Variable
Items
1. Distinguishes the organization from other organizations
2. Improves the efficiency of internal operations
3. Enhances organizational reputation
4. Enhances communication with other organizations
5. Enhances and improves coordination with other organizations
6. Improves decision making
7. Overall, makes the organization successful

Cronbach's Alpha for Organizational Impact = .89 (N = 287)
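The reported alphas follow the standard Cronbach formula, alpha = k/(k-1) × (1 - sum of item variances / variance of the summed scale). A sketch on synthetic data (not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n respondents x k items) score matrix."""
    items = np.asarray(items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(4)
n, k = 287, 10
latent = rng.normal(size=(n, 1))
# Hypothetical responses to k items tapping one construct
X = latent + 0.5 * rng.normal(size=(n, k))
a = cronbach_alpha(X)
print(round(a, 2))  # high alpha for internally consistent items
```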
In other words, when users perceive that the information systems are high-quality systems that produce high-quality information, their perception that the information systems make them more productive, by providing timely and needed information for their work-related responsibilities, increases. This, in turn, increases the perception that information systems enhance the effectiveness of the organization. Moreover, the study findings indicated that when the perception of having high-quality information systems that produce high-quality information increases, the perception that information systems enhance the effectiveness of the organization also increases.
The three-variable model was tested using regression analysis and path analysis. Both analyses supported the above relationships. The path analysis findings indicated that the
model fit the data. Both analyses showed that Satisfaction had a significant positive impact on both Individual Impact and Organizational Impact. Thus, the results of this study provide support for the three-variable model of evaluating information system success (Figures 8 and 9).
In summary, the three-variable model of evaluating information system success emerged from DeLone and McLean’s original six-variable model and from the four-variable model through several respecification steps arising from previous stages of analysis. Unlike the two models that precede it, the three-variable model proposes that information system success is a three-dimensional construct whose dimensions are related as indicated in Figures 8 and 9. As such, this study has provided a new, empirically tested model of information system success in public organizations.
The original six-variable model of DeLone and McLean and the four-variable model, nevertheless, were useful in providing the general frame that includes the possible dimensions of information system success that could exist within the organizational boundary. Based on the preceding, the study’s research question was answered.
Potential Contributions and Implications
The study of information systems in public organizations will become very important as we enter the information age, in which usage of and investment in public organization information systems are expected to increase greatly. However, the study of information systems in public organizations is still in its infancy in both the public administration theory and public management literatures. An emerging line of research called public management
information literature, which started in the late 1980s, focused on the investigation of information systems in public organizations. The literature review for the present study revealed that most studies in this literature are descriptive, with very few conceptual studies (Bozeman & Bretschneider, 1986; Stevens & McGowan, 1985). This study makes an important contribution to this small body of literature.
The study contributes to public administration theory and public management
thinking in several ways. First, this study contributes to the conceptual side of public
management information systems by proposing a comprehensive model for evaluating
information systems in public organizations. This model takes into consideration both inside
and outside actors, whose evaluations are crucial in assessing information system
performance. Part of this model was empirically tested in this study.
Consequently, this study is a further step in developing a theory of information
system evaluation in the public sector. Before this study, the issue had never been investigated
in such a comprehensive manner. Thus, the study extends the work of Bozeman and Bretschneider
(1986), Stevens and McGowan (1985), and other writers who attempted to conceptualize
public management information systems.
Second, the study also contributes to the practical side of public management. Public
managers face ever greater challenges in the information age. An important and significant
challenge involves how to evaluate the success of information systems, how to justify the
allocation of public resources to these systems, and how to ensure that information
systems positively affect individuals and the overall organization.
The conceptual model developed and tested in this study has direct implications for
the practice of public management. As a result, it can be called a “dual relevance
knowledge” (Kaboolian, 1996) or “practical theory” (Harmon & Mayer, 1994) model.
Kaboolian (1996) defined dual-relevance knowledge as knowledge that benefits both the
theoretical side of the field and the practical side. Harmon and Mayer (1994) defined a
practical theory as one that either illuminates possibilities for action that would not otherwise
be apparent or stimulates greater understanding of what the person has already been doing.
According to Kaboolian (1996), “one way to go about this [creating dual relevance
knowledge] is to ask questions of relevance to practitioners and test, evaluate, and develop
the insights of the disciplines in the course of the answering those questions” (p. 80).
The three-variable model is based on the six-variable model of DeLone and
McLean. These authors developed their model by conducting a comprehensive review of
relevant empirical literature. Consequently, the three-variable model belongs to the type of
theoretical models described by both Kaboolian (1996) and Harmon and Mayer (1994).
Thus, the findings of this study can assist public managers in dealing with the
challenges of the information age. The three-variable model and the instruments developed and
validated in this study can be used to measure the success of existing information systems in
increasing the effectiveness and efficiency of both individual and organizational
performance. Using the three-variable model and instruments, public managers could use
the results of the evaluation to provide empirical evidence to overseeing actors about the
level of success of their information systems and, in turn, justify the public resources
invested in these systems.
The findings of this study, furthermore, provide guidance on how public managers may
influence the success of information systems within their organizations. Findings of this
study indicate that Satisfaction is a key variable in the three-variable model. The empirical
evidence in this study suggests that as Satisfaction increases, Individual Impact and
Organizational Impact also increase. The Satisfaction variable measures the satisfaction of
information systems users with the quality of information, quality of systems, and their
overall satisfaction. Thus, public managers may positively influence the success of
information systems through increasing the quality of both the information and the systems
themselves.
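The claimed monotone association between Satisfaction and the impact variables can be checked directly on survey data. As a minimal sketch (the respondent scores below are hypothetical illustrations, not the study's data), the Pearson correlation between per-respondent mean Satisfaction scores and mean Individual Impact scores could be computed as follows:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-respondent mean scores on 5-point scales.
satisfaction      = [3.2, 4.1, 2.8, 4.6, 3.9, 2.5, 4.3, 3.6]
individual_impact = [3.0, 4.4, 2.9, 4.5, 3.7, 2.8, 4.1, 3.5]

print(round(pearson_r(satisfaction, individual_impact), 3))
```

A coefficient near +1 would be consistent with the finding that Individual Impact rises with Satisfaction; in practice a significance test, and ideally a structural model, would accompany the point estimate.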
Looking deeper into the Information Quality and System Quality variables, we could
argue that Information Quality reflects users' needs for the information necessary
to accomplish their work, while System Quality reflects the technical needs and conditions
that should be in place for information systems to be of high quality, such as the type of
cabling and cooling systems used in buildings and the level of electric power available.
Thus, satisfying the needs of the technical subsystem (information systems) and the
social subsystem (users) could lead to a higher level of satisfaction. Public managers could
satisfy these needs using different methods. For example, they could allocate
more resources toward buying more powerful information systems.
However, an effective and efficient method should be introduced at the
design stage of information systems, one that takes into account the needs of both subsystems
(technical and social). In other words, to design a successful information system, public
managers should not follow the technological imperative model, which views technology as
the independent variable that determines other dimensions of an organization. According to
this approach, the introduction of high technology leads to increased productivity of an
organization. This design approach could increase productivity in the short run; however, in
the long run it is doomed to fail. A logical explanation for why the increase in productivity
could occur in the short run is given by Chisholm (1988), who explained that “suboptimal
designs occur because the employees who operate most high-technology systems
bear the consequences of design decisions and must make the system work regardless of its
designs” (p. 41).
The sociotechnical systems approach provides a way of achieving the joint
optimization of both the technical and social systems within an organization. Several writers
have proposed and used this approach to design information systems with great success (e.g.,
Chisholm, 1988; Hogan, 1993; Purser, 1991; Shani & Sena, 1994; Sharma et al., 1991;
Terlage, 1994). For example, Chisholm (1988) stated:
The advanced information technology requires new strategies...and different organizational and workplace designs that emphasize the human attributes of learning, questioning, and deciding to reach the technology’s potential for contributing to organization effectiveness...The sociotechnical systems (STS) approach provides an effective way of working to improve total system performance through improved links between the human system and technology. (p. 45)
Consequently, the STS approach is an effective and efficient method for
designing a successful information system, in that it not only fulfills the needs
of the technical and social subsystems but also creates a flexible and adaptive information
system that responds to the environment.
Third, another contribution of this study is the development of a survey instrument
that could be used as a foundation for future research in the public sector involving
any of the variables investigated in this study. This instrument has been validated
through a rigorous process, which included a comprehensive review of available
instruments in both the public and private sectors, a back-translation process, a pilot
study, consultation with experts in information systems and public administration, factor
analysis, and tests of internal consistency.
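The internal-consistency step of such a validation process is conventionally reported as Cronbach's alpha, alpha = (k/(k-1))(1 - sum of item variances / variance of total scores). A minimal sketch of the computation, using hypothetical item scores rather than the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k item-score columns over the same n respondents."""
    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - sum(pvar(col) for col in items) / pvar(totals))

# Hypothetical responses to three 5-point satisfaction items.
items = [
    [4, 5, 3, 4, 2, 5],  # item 1
    [4, 4, 3, 5, 2, 4],  # item 2
    [5, 4, 2, 4, 3, 5],  # item 3
]
print(round(cronbach_alpha(items), 3))
```

Values of roughly .70 and above are commonly taken as acceptable internal consistency for a multi-item scale.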
A fourth contribution of this study is the testing, in a Middle Eastern country
(Kuwait), of a model, instruments, and a research process based on prior research in the
United States. This study reached findings similar to those found in the United States in terms
of the relationships in the study model and the results of the instrument validation. Thus,
the external validity of the model, concepts, and instruments was enhanced by this study.
Future Research Directions/Suggestions
Several avenues of future research are suggested by the findings of this study. First,
while this study has provided much-needed empirical support for the three-variable model
of information system success, broader empirical support for this model is still
needed. Thus, future research should test the applicability of this model in different types of
public organizations (e.g., non-profit organizations) and in other societies.
Second, as stated throughout, this study represents a first step in developing a
comprehensive model for evaluating information systems in the public sector. Thus, a
logical extension of this study is to add external actors to the three-variable model of
evaluating information systems. The equations in Chapter Three could be used as the basic
hypotheses for this future research.
Third, because this study employed quantitative methods and only questionnaires to
collect data, future research should also employ qualitative methods. For example, actual
observation of users of information systems or interviews with these users may give valuable
insights regarding their satisfaction with these systems, rather than just asking them
questions about their perceived satisfaction. Likewise, reviewing secondary data, such as
individual and overall organizational productivity reports, could provide additional insights
regarding the Individual Impact and Organizational Impact variables. In other words, the
study would greatly benefit from some type of triangulation in the data collection method.
Fourth, the findings of this study did not show positive associations or relationships
between System Usage and the other variables in the model. As stated in this study, a
possible reason could be the measure used to test for System Usage, which was adopted
from Igbaria et al. (1989). This measure relies heavily on two dimensions: first, the
number of organizational functions in which information systems are used; second, the
number of software packages used in work-related responsibilities. During the pilot study
and consultation process, several participants and Kuwaiti professors indicated that
information systems had been only recently introduced in their organizations and that these
systems were not widely used in all work-related tasks. This could be the reason for the low
usage of information systems and the lack of associations between this variable and the
other variables in the study model, which led, in turn, to dropping System Usage from the
study model. Thus, future research should use other measures of System Usage that do not
have such reliance (e.g., Kim & Lee, 1986; Sherman, 1997). This might lead to including
System Usage in the model.
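For illustration, a breadth-based System Usage index of the kind described above could be scored as the mean of two coverage ratios. This function and its counts are hypothetical, one simple way such an index might be computed, not Igbaria et al.'s actual scoring procedure:

```python
def usage_breadth(n_functions_used, n_packages_used,
                  total_functions=8, total_packages=6):
    """Breadth-of-use index in [0, 1]: mean of the two coverage ratios
    (functions used out of those listed, packages used out of those listed)."""
    return 0.5 * (n_functions_used / total_functions
                  + n_packages_used / total_packages)

# A respondent using 2 of 8 listed functions and 1 of 6 listed packages.
print(round(usage_breadth(2, 1), 3))
```

In a newly computerized organization most respondents would cluster near zero on such an index, compressing its variance; that restriction of range is one mechanism by which the measure could fail to show associations with the other model variables.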
BIBLIOGRAPHY
Al-Jannaee, A. (1989). An investigation of leadership style and its effect upon
employee motivation and satisfaction with supervisors in public and private organizations in
Kuwait. Unpublished doctoral dissertation. University of Denver, Denver, CO.
Anakwe, U. P., Anandarajan, M., & Igbaria, M. (1998). Information technology
usage dynamics in Nigeria: An empirical study. Journal of Global Information Management,
7 (2), 13-21.
Ang, J. & Soh, P. H. (1997, October). User information satisfaction, job satisfaction
and computer background: An exploratory study. Information & Management, 32 (5), 255-
266.
Anonymous. (2001). Facts and figures (Kuwait). In LEXIS.NEXIS Academic
2. How do you evaluate the ease or difficulty of utilizing the capability of the computer system?
Simple: 1...2...3...4...5...6...7: Complex
Easy-to-use: 1...2...3...4...5...6...7: Hard-to-use
3. How do you evaluate the set of vocabulary, syntax, and grammatical rules used to interact with the computer system?
4. How do you evaluate the relative balance between the cost and the considered usefulness of the computer- based information products or services that are provided? The costs include any costs related to providing the resource, including money, time, manpower, and opportunity. The usefulness includes any benefits that the user believes to be derived from the support.
1. How do you evaluate the correctness of output information?
High: 1...2...3...4...5...6...7: Low
Sufficient: 1...2...3...4...5...6...7: Insufficient
2. How do you evaluate the availability of the output information at a time suitable for its use?
Timely: 1...2...3...4...5...6...7: Untimely
Consistent: 1...2...3...4...5...6...7: Inconsistent
3. How do you evaluate the variability of the output information from that which it purports to measure?
Consistent: 1...2...3...4...5...6...7: Inconsistent
High: 1...2...3...4...5...6...7: Low
4. How do you evaluate the consistency and dependability of the output information?
High: 1...2...3...4...5...6...7: Low
Sufficient: 1...2...3...4...5...6...7: Insufficient
5. How do you evaluate the age of the output information?
Good: 1...2...3...4...5...6...7: Bad
Adequate: 1...2...3...4...5...6...7: Inadequate
6. How do you evaluate the comprehensiveness of the output content?
Complete: 1...2...3...4...5...6...7: Incomplete
Sufficient: 1...2...3...4...5...6...7: Insufficient
7. How do you evaluate the material design of the layout and display of the output contents?
Good: 1...2...3...4...5...6...7: Bad
Readable: 1...2...3...4...5...6...7: Unreadable
8. How do you evaluate the amount of information conveyed to you by the computer-based systems?
Concise: 1...2...3...4...5...6...7: Redundant
Necessary: 1...2...3...4...5...6...7: Unnecessary
9. How do you evaluate the degree of congruence between what you want or require and what is provided by the information products and services?
Section III: Please circle the most appropriate answer that describes your usage of the information system.
1. On an average working day that you use a computer, how much time do you spend on the system?
(1) Almost never  (2) Less than ½ hour
(3) From ½ hour to 1 hour  (4) 1-2 hours
(5) 2-3 hours  (6) More than 3 hours
2. On the average, how frequently do you use a computer?
(1) Almost never  (2) Once a month
(3) A few times a month  (4) A few times a week
(5) About once a day  (6) Several times a day
3. With respect to the requirements of your current job, please indicate to what extent you use the computer to perform the following tasks: (Please circle one; 1 = Not at all, 5 = To a great extent)
1. Historical reference: 1 2 3 4 5
2. Looking for trends: 1 2 3 4 5
3. Finding problems/alternatives: 1 2 3 4 5
4. Planning: 1 2 3 4 5
5. Budgeting: 1 2 3 4 5
6. Communicating with others: 1 2 3 4 5
7. Controlling and guiding activities: 1 2 3 4 5
8. Making decisions: 1 2 3 4 5
4. With respect to the requirements of your current job, please indicate the number of packages you use from the following: (Please Check)
Section V: Please indicate the extent to which information systems have impacted your job in the following:
1 = Not at All, 2 = A Little, 3 = Moderately, 4 = Much, 5 = Great Deal
Task productivity:
(1) Information system allows me to accomplish more work than would otherwise be possible. 1 2 3 4 5
(2) Information system increases my productivity. 1 2 3 4 5
(3) Information system application saves my time. 1 2 3 4 5
Task innovation:
(4) Information system helps me try out innovative ideas. 1 2 3 4 5
Customer satisfaction:
(5) Information system helps me meet customer needs. 1 2 3 4 5
(6) Information system improves customer satisfaction. 1 2 3 4 5
(7) Information system improves customer service. 1 2 3 4 5
Management control:
(8) Information system helps the management to control the work process. 1 2 3 4 5
(9) Information system improves management control. 1 2 3 4 5
(10) Information system helps management control performance. 1 2 3 4 5
Section VI: Demographic Information.
1. Your job title: ______________________
2. How long have you been in this position? ____ (Years) ____ (Months)
3. Your gender: ( ) Male ( ) Female
4. Your age: ( ) Less than 20 ( ) 20 to 29 ( ) 30 to 39 ( ) 40 to 49 ( ) More than 49
5. Your education: ( ) Less than high school ( ) High school ( ) High school and some college ( ) Bachelor ( ) Master ( ) Doctorate
6. Are you: ( ) Kuwaiti ( ) Non-Kuwaiti
7. How many years have you been working in government?
( ) Less than one year ( ) 1-5 years ( ) 6-10 years ( ) 11-15 years ( ) 16-20 years ( ) 21-25 years ( ) Over 26 years
8. How many years have you been working in your current agency?
( ) Less than one year ( ) 1-5 years ( ) 6-10 years ( ) 11-15 years ( ) 16-20 years ( ) 21-25 years ( ) Over 26 years
9. How long have you been working with information systems (computers)? ____ (Years) ____ (Months)
End of Questionnaire
Appendix B
English Version of the Management Questionnaire
Section I: Please indicate the extent to which an information system has helped your institution in the following:
1 = Not Much ... 7 = Extensively
(1) Distinguishing your institution from similar institutions.
Not Much: 1...2...3...4...5...6...7: Extensively
(2) Reducing administrative costs.
Not much: 1...2...3...4...5...6...7: Extensively
(3) Improving the efficiency of internal operations.
Not much: 1...2...3...4...5...6...7: Extensively
(4) Enhancing the institution's reputation.
Not much: 1...2...3...4...5...6...7: Extensively
(5) Enhancing communication with other organizations.
Not much: 1...2...3...4...5...6...7: Extensively
(6) Enhancing and improving coordination with other organizations.
Not much: 1...2...3...4...5...6...7: Extensively
(7) Improving decision making.
Not much: 1...2...3...4...5...6...7: Extensively
(8) Making the institution successful overall.
Not much: 1...2...3...4...5...6...7: Extensively
Section II: Demographic Information.
1. Your job title: ______________________
2. How long have you been in this position? ____ (Years) ____ (Months)
3. Your gender: ( ) Male ( ) Female
4. Your age: ( ) Less than 20 ( ) 20 to 29 ( ) 30 to 39 ( ) 40 to 49 ( ) More than 49
5. Your education: ( ) Less than high school ( ) High school ( ) High school and some college ( ) Bachelor ( ) Master ( ) Doctorate
6. Are you: ( ) Kuwaiti ( ) Non-Kuwaiti
7. How many years have you been working in government?
( ) Less than one year ( ) 1-5 years ( ) 6-10 years ( ) 11-15 years ( ) 16-20 years ( ) 21-25 years ( ) Over 26 years
8. How many years have you been working in your current agency?
( ) Less than one year ( ) 1-5 years ( ) 6-10 years ( ) 11-15 years ( ) 16-20 years ( ) 21-25 years ( ) Over 26 years
9. How long have you been working with information systems (computers)? ____ (Years) ____ (Months)
End of Questionnaire
Appendix C
Letter of Approval from the Human Subjects Committee at Pennsylvania State University
Penn State
Vice President for Research
Office for Regulatory Compliance
The Pennsylvania State University
212 Kern Graduate Building, University Park, PA
www.research.psu.edu

To: Helaiel M. Almutairi
From: C. A. Yekel, Director of Regulatory Affairs
Subject: Results of Review of Proposal - Expedited (ORC #0010461-00)
Approval Expiration Date: April 21, 2001
“Evaluating Information System Success in Public Organizations: The Seven Dimensions Model”
The Behavioral and Social Sciences Committee of the Institutional Review Board has reviewed and approved your proposal for use of human subjects in your research. This approval has been granted for a one-year period.
Approval for use of human subjects in this research is given for a period covering one year from today. If your study extends beyond this approval period, you must contact this office to request an annual review of this research.
Subjects must receive a copy of any informed consent documentation that was submitted to the Compliance Office for review.
By accepting this decision you agree to notify the Compliance Office of (1) any additions or procedural changes that modify the subjects' risks in any way and (2) any unanticipated subject events that are encountered during the conduct of this research. Prior approval must be obtained for any planned changes to the approved protocol. Unanticipated subject events must be reported in a timely fashion.
On behalf of the committee and the University, I thank you for your efforts to conduct your research in compliance with the federal regulations that have been established for the protection of human subjects.
C A Y /jhn
cc: K. English
R. Chisholm
S. Peterson
H. Sachs
An Equal Opportunity University
Appendix D
Letters of Approval from Participating Ministries
[Scanned approval letter (in Arabic) from Kuwait University, College of Administrative Sciences, Office of the Dean, dated 2000/6/13; the Arabic text is not legible in this reproduction.]
[Scanned approval letter (in Arabic) from the Ministry of Interior, General Administration Department; the Arabic text is not legible in this reproduction.]
[Scanned approval letter (in Arabic) from the Ministry of Interior; the Arabic text is not legible in this reproduction.]
[Scanned approval letter (in Arabic) from the Ministry of Interior, General Administration Department; the Arabic text is not legible in this reproduction.]
[Scanned approval letter (in Arabic) from the Ministry of Interior; the Arabic text is not legible in this reproduction.]
[Scanned approval letter (in Arabic) from Kuwait University, College of Administrative Sciences; the Arabic text is not legible in this reproduction.]
[Scanned approval letter (in Arabic) from Kuwait University; the Arabic text is not legible in this reproduction.]
[Arabic-language demographic section of the employee questionnaire; the Arabic text is not legible in this reproduction.]

193
Appendix G
Arabic Version of the Management Questionnaire
[Arabic version of the management questionnaire; the Arabic text is not legible in this reproduction.]
196
Appendix H
English Version of the Cover Letter
197
Pennsylvania State University-Harrisburg
School of Public Affairs
777 West Harrisburg Pike Middletown, PA 17057
Tel: (717) 948-6050 Fax: (717) 948-6320
Dear Sir/Madam
How do you know whether the information system in your organization is successful? What measures do you use to gauge the success of your information systems? And are you sure that the measures you use are the most appropriate ones?
As my dissertation research, I am investigating how to evaluate information systems in the public sector and thus attempting to answer the above questions. Conducting the study in your organization has been approved by the top management.
The enclosed survey is designed to gather information about the various measures that are used to evaluate information systems success. The collected information will play a major role in developing a comprehensive model for evaluating information systems in the public sector.
Consequently, your participation in this study is essential to its success in developing the comprehensive model. Participation in this study is voluntary. If you decide to participate, please carefully read the instructions in each section and answer all questions without discussing them with anyone. There are no right or wrong answers to these questions. Usually, your first reaction to a question is a good indication of how you feel. Mark the response that best indicates your reaction, and do not spend too much time on any one item. After completing the survey, please return it to me. Completing the survey will take 15-25 minutes.
Your answers will be kept strictly confidential. No one other than me will be allowed access to your answers. Individual responses will be anonymous. The data will be aggregated and analyzed only on a group basis.
The study’s findings will be provided to you upon request. If you have any questions or comments, please feel free to ask me. In advance, thank you for your participation in this study.