Info Systems J (2005) 15, 61–82

© 2005 Blackwell Publishing Ltd

Evaluating e-government: learning from the experiences of two UK local authorities

Zahir Irani*, Peter E.D. Love, Tony Elliman, Steve Jones§ & Marinos Themistocleous

*Department of Information Systems and Computing, Information Systems Evaluation and Integration Group (ISEing), Brunel University, Uxbridge, Middlesex UB8 3PH, UK, email: [email protected], [email protected] and [email protected]; We-B Research Centre, School of Management Information Systems, Edith Cowan University, Joondalup, WA 6027, Australia, email: [email protected]; §Conwy County Borough Council, Bodlondeb, Conwy LL32 8DU, UK, email: [email protected]

Abstract. Part of the remit of public sector management includes planning and reflecting on capital expenditure on new technology. With this in mind, the role that information systems play in supporting improvements in e-government service delivery to stakeholder groups continues to attract much attention. The authors of this paper seek to define the scope and role that information systems evaluation plays within the public sector. In particular, the authors assess whether public sector organizations might benefit from the use of established ex-ante evaluation techniques, when applied to analyse the impact of e-government information systems. Following a comprehensive review of the normative literature, an initial conceptual framework for public sector information systems evaluation is proposed, which is then empirically explored within two local government authorities. The conceptual framework is then revised by using the structured case approach, which is dependent on an iterative research cycle where triangulated data are elicited. This then supports the emergence of new concepts during each research cycle that leads to the view that information systems evaluation in the public sector is a process of experiential and subjective judgement, which is grounded in opinion and world views. This leads the authors to challenge the appropriateness of traditional modes of investment appraisal when applied in the public sector. The finalized framework embraces investment decisions, evaluation methods, culture and structure, as well as post hoc evaluation. It emphasizes the importance of situated, interpretive user assessments in evaluating e-government investments.

Keywords: e-government evaluation, UK public sector, structured case method, culture, investment decision-making, evaluation methods


INTRODUCTION

Electronic government (e-government) encompasses a wide range of services: dissemination of information, commerce with the private sector, services to individual citizens and businesses, and participatory democracy. The motivation by local and central governments to reduce administrative and operational costs, as well as enhancing the services they offer to businesses, citizens and the general community at large, has been a driving force for the development and implementation of an e-government infrastructure within the United Kingdom (UK). In the United States, Chenery (2001) discusses the financial motivation for e-government, estimating that for every government transaction that can be moved online, a potential cost saving of $400 can be achieved. In general, the move towards the development of an e-government infrastructure highlights the scope for long-term savings and improved service quality levels, which could be achieved by the public sector through electronic delivery of services to its citizenry.

In the mid-1990s, government agencies attempted to publish strategies that could take advantage of the capabilities offered through the internet. However, the information and policy strategies that were created were not fully understood by those who were supposed to implement them. Evidence to support this emerged during this period, where Horrocks & Hambley (1998) reported that the number of UK local government web sites rose from 40 in 1995 to 300 in 1998. This figure has since spiralled to a position where the number of government web sites that now exist in the UK remains unknown. The proliferation of government web sites has engendered debates about form (Davies & Chalk, 1996), quality (Eschenfelder et al., 1997) and policy issues (McMullen, 2000). Early web sites were static and focused on the dissemination of information, yet political ambitions raced ahead with a commitment to deliver 25% of government services over the internet by the end of 2002 (DTI, 1998) and 100% by the end of 2008 (Cabinet Office, 1999). Essentially, the internet was viewed as being a vehicle for government and citizen interaction (Steyaert, 2000), and a new participatory democracy. This process was accelerated in 1999 through an initiative by the European Commission that was set up to bring the benefits of the information society within reach of all Europeans, regardless of the widening of the European Union (EU) community. Target dates have since been reeled in, with the UK Government revising its target for 100% access by the end of 2005 (Cabinet Office, 2000).

There are almost 500 local authorities (with additional bodies responsible for services, such as policing, fire services and passenger transport) within the UK, all of which are required to implement online systems in response to central government policy by 2005. This requirement has placed a considerable amount of pressure on local authorities' Information Technology (IT) service departments, as they try to cope with the sheer magnitude of services that have to be offered online. The focus on completing the task at hand has meant that little consideration has been given to strategically evaluating their information systems and possible alternative scenarios. Gronlund (2000) observes that the development of e-government infrastructures has been typically 'crisis driven', with little concern for the effect on organizational structure or understanding of benefits to citizens. Perhaps the most serious indication of this dilemma is the production of a consultation document addressing local e-government strategy (DTLR, 2002) a year after local governments were required to submit implementation plans.

The implementation of a broadband e-government infrastructure is expected to bring radical change in the way in which businesses and citizens communicate and interact with one another. In Korea, for example, Choudrie & Lee (2004) found that the use of broadband within government departments and agencies has had a catalytic impact on the quality of public services, and encouraged previously bureaucratic organizations to re-engineer the way services are delivered to citizens. However, the nature of such change and resulting beneficiaries are not clearly understood and difficult to predict from both a social and technical perspective, largely because of ethnic mix, cultural diversity and sensitive political motivations. Some central government guidance on cost-benefit analysis for prioritization has been forthcoming in the UK (Cabinet Office, 2000), but this has tended to be financially based. Experience in the commercial sector points to the inadequacies of this approach when applied to the evaluation of information systems (Irani & Love, 2001). At least in its commercial interactions (e.g. tendering and procurement) local government can learn directly from e-business and thus capitalize on the experience of the private sector. In other areas, the perception of value in public service presents a different paradigm, with local governments needing to benefit from normative research and seeking to understand the costs, benefits and risks associated with information systems in the public sector (Bannister & Lalor, 2001; Irani et al., 2004). Even in the simple provision of information, the creation of a web site can distort local government administrative structures and requires close ties between web designers and service management (Huang & Chao, 2001). Adding no more interaction than an email enquiry point can create new demands on services (Mon, 2000) and require a need for finite resources. Given these competing demands to supply information through multiple service delivery channels, new strategic challenges are clearly presented for local authorities if they are to realize internal and external government targets. A review by Oates (2002) of several local government web sites addressing issues around the management of the foot and mouth epidemic in the UK showed considerable variation in the scope and quality of material presented on the web, and that a positive impact for web information services is not guaranteed, and often a result of poor ex-ante evaluation (Hall, 1998).

Central government evaluation of e-government systems is either volumetric (as in the eEurope benchmarking indicators: EU 2000 and EU 2001) or based on case study coverage of good practice (see for example SOCITM, 2002). However, evaluations need to address the notion of benefit to the citizen through the provision of infrastructure. It is here that a fundamental difference exists between public and private sector evaluation. Public sector managers can turn to a variety of local government sources for guidance. Bodies such as the Society for IT Managers (SOCITM), with its Information Age Government Group, have accumulated a significant amount of literature over the past few years. With this in mind, the authors of this paper seek to address a relative void, by presenting the evaluation experiences of two UK local authorities that sought to embrace e-government. The experiences reported provide a learning opportunity for other local authorities in the UK as well as the wider international community. This is realized by demonstrating inherent differences that exist between private sector evaluation and more rarely reported examples in the public sector.


INFORMATION SYSTEMS EVALUATION

Lefley & Sarkis (1997) argue that investment justification processes used by management are typically based on the use of traditional appraisal techniques, which are largely inadequate for strategic decision-making. Such techniques, even in the public sector, remain limited in use and scope (Jones & Hughes, 2001). According to Irani et al. (2001), managers tend to be myopic when considering information system (IS) investment decisions, primarily because they do not have a sufficiently robust framework by which to evaluate the benefits and costs of such investments. Moreover, managers tend to give little or no attention to the 'hidden' or indirect costs surrounding IS adoption, which may be up to four times greater than the 'direct' IT cost component (Hochstrasser, 1992). The implications of ignoring these 'indirect' costs may lead to 'cost-creep' or project failure over the long term. The reason for this is that many companies only realize the significance of these additional cost factors once the project has been initiated. As a result, it is often too late to stop the investment because of its momentum and political pressure (internal and external). Much of the momentum for e-government IS projects comes from considering the portfolio of benefits to their numerous stakeholders, be they strategic, tactical or operational, with the nature of the benefits being quantitative, qualitative or intangible (Irani et al., 2001). Poor decision-making often does not consider the wide-ranging portfolio of costs and benefits, which can often lead to significant financial losses. In turn, this can translate into a loss of jobs through cost cutting, which has a clear political dimension for both local and central governments. Interestingly, in both the public and private sector, the costs associated with such losses are invariably passed on to the customer through increases in product/service prices (private sector) or rises in local/central government taxes (public sector).
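To make the scale of the 'hidden' cost problem concrete, the following sketch applies Hochstrasser's cited upper-bound multiplier to an invented budget figure (the £500 000 direct cost and the function name are purely illustrative; only the 'up to four times' ratio comes from the text above):

```python
def total_cost_estimate(direct_cost: float, indirect_multiplier: float = 4.0) -> float:
    """Whole-life cost sketch: direct IT spend plus indirect costs
    (training, management time, organizational disruption), taken here
    at Hochstrasser's upper bound of four times the direct component."""
    return direct_cost + direct_cost * indirect_multiplier

# An IS project budgeted at £500,000 of direct cost could, at the upper
# bound, imply a total exposure of £2.5 million -- the gap that drives
# the 'cost-creep' described above.
print(total_cost_estimate(500_000))  # 2500000.0
```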

The evaluation literature is replete with attempts to surmount the theoretical and practical problems of IS investment evaluation (Irani & Love, 2002). Traditional investment appraisal techniques, such as Return on Investment, Internal Rate of Return, Net Present Value and Payback approaches, are the most commonly used methods to evaluate IS investments in the public and private sector (Ballantine & Stray, 1999). Such techniques are typically based on conventional accountancy frameworks, and often facilitated under the auspices of a finance director. They are specifically designed to assess the 'bottom-line' financial impact of investments, by setting 'direct' project costs against quantifiable benefits achievable. However, as more organizations realize that such techniques are unable to accommodate the full range of benefits, costs and risks, many public and private sector organizations are left with the quandary of deciding which approach to use, if any. Consequently, there has been much debate about the types of techniques that constitute meaningful justification. The inability of many organizations to quantify the 'full' implications of their investments in new technology, from a cost-benefit and risk perspective, puts in question the predictive value of those justification processes that are dependent on traditional appraisal techniques.
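For readers unfamiliar with these appraisal techniques, two of the four named above can be sketched in a few lines; the cash-flow figures below are invented for illustration and do not come from the paper:

```python
def npv(rate: float, cashflows: list[float]) -> float:
    """Net Present Value: discount each year's net cash flow back to
    year 0; cashflows[0] is the initial (usually negative) outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def payback_period(cashflows: list[float]):
    """Payback: the first year in which cumulative undiscounted cash
    flow turns non-negative, or None if the outlay is never recovered."""
    cumulative = 0.0
    for t, cf in enumerate(cashflows):
        cumulative += cf
        if cumulative >= 0:
            return t
    return None

# Invented example: £100k outlay followed by £40k of quantified
# benefits per year, discounted at 10%.
flows = [-100_000, 40_000, 40_000, 40_000, 40_000]
print(round(npv(0.10, flows)))  # 26795 -> positive, so 'accept' on NPV grounds
print(payback_period(flows))    # 3 -> outlay recovered in year 3
```

Note that both measures operate only on the quantified cash flows fed into them; intangible benefits, indirect costs and risk, the very items argued here to dominate e-government projects, never enter the calculation.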

Even when traditional appraisal methods are applied rigorously, their relevance in the public sector domain is open to question (Bannister, 2001; Jones & Hughes, 2001). The reason for this is that standard economic and business measures of increased throughput, productivity gains, financial payback and return of capital used, are relatively easy to define in a manufacturing environment but have little meaning in public administration. Indeed, the concept of Value for Money (VFM), which has been advocated as the most appropriate model for the public sector, is considered as having met with limited success. This is especially the case with IS projects, because of the complexity of determining VFM and the difficulty in defining IS project success (Bannister, 2001). Accordingly, Margetts & Willcocks (1993) reported that IS investments are not a proven cost cutter in the public sector, as the costs of implementation often outweigh cost and efficiency savings. In highlighting the importance of opinion rather than metrics associated with IS evaluation, Heeks (1999, p. 44) states that to understand IS evaluation in public sector organizations, 'one would be better advised to understand the "wetware" between people's ears rather than evaluating hardware and software costs'. A constant thread extracted from the evaluation literature over recent years claims that most organizations have no management processes to govern and measure the achievement of desired outcomes, neither do they have processes to establish what benefits were actually achieved (Farbey et al., 1993; Willcocks & Lester, 1994; Smithson & Hirschheim, 1998; Remenyi et al., 2000; Irani & Love, 2001; Irani, 2002). In practice, IS evaluation has not been given a high level of importance in organizations, and is often overlooked. In fact, it has been suggested that the process of IS evaluation could be further improved if it were viewed from a social perspective so that the impact of IS implementation from a user perspective could be better understood. Indeed, many authors have argued this perspective convincingly, with empirical evidence available to strengthen their case (Walsham, 1999; Serafeimidis & Smithson, 2000; Wilson & Howcroft, 2000; Irani & Love, 2001; Jones & Hughes, 2001). Therefore, IS evaluation should be viewed as a process of experiential and subjective judgement, which is grounded in opinion and world views, and therefore challenges the predictive value of traditional investment methods.

There remain serious implications with not carrying out any form of evaluation during the e-government infrastructure-building process. Willcocks & Lester (1994) conclude that evaluation of IS projects is rarely undertaken, which results in many private organizations being unable to determine the impact of their deployment. There appears to be a similar perception in the public sector, where in April 2000 the UK central government was forced to initiate the Best Value (BV) legislation, which imposes a responsibility on public sector organizations, including local authorities, to ensure and demonstrate that services are effective, efficient and performing satisfactorily. Part of this BV process includes understanding the impact of IS investments. The rationale for this is that information systems lie at the genesis of e-government infrastructures, thus suggesting that evaluation needs to be seen as a key part of the BV process, by better understanding associated benefits, costs and risks.

A ROBUST METHODOLOGY TO EXPLORE E-GOVERNMENT INFORMATION SYSTEMS EVALUATION

The researchers found from their previous studies that a lack of robust evaluation is primarily attributable to not considering human and organizational factors. This underlines the significance of 'soft' contextual issues associated with the ex-ante evaluation process (Irani & Love, 2001). Such consideration supports the development of lessons that can act as a frame of reference for aiding the evaluation of e-government projects. Consequently, there is a need for a research methodology that engages an organization and its senior staff.

Structured case method

The structured case method (Carroll et al., 1998; Carroll & Swatman, 2000) has been widely used to extend knowledge about the way in which local authorities evaluate their e-government projects. The structured case provides a focused and yet flexible, structured and dynamic approach to the field research process, through the following:

1 a focused view on maximizing the benefits of scarce resources (time, manpower and money);
2 a flexible approach to allow theory and knowledge to emerge from the data collected and integrate unexpected outcomes;
3 guiding the researcher to follow and assure rigour; and
4 the dynamic ability of being able to record the processes of knowledge and theory-building.

The structured case method seeks to build theory, which may be seen as 'a system of interconnected ideas that condense and organise knowledge' (Neuman, 1991, p. 30). This approach attempts to explain, predict and provide understanding about the phenomena under enquiry. The purpose of adopting such an approach is to discover and discuss relationships between concepts, so as to build a 'web of meaning' and learning in this instance, with respect to the human and organizational issues of information systems evaluation. The development of a series of Conceptual Frameworks (CF) is used to demonstrate the process of knowledge generation, where CFn is the latest version of understanding surrounding the phenomena. The knowledge-discovery process is not only inductive but is interrelated with practice. Applied research that uses this approach can lead to theory-building, which can, in turn, lead to further field research and more theory-building. The research cycle therefore leads to changes to the CF by extracting meaning (context) through the case enquiry.

Conceptual framework (CF1)

Although research covering IS evaluation in the public sector is limited, there are some pointers to concepts that need to be part of the initial conceptual framework (CF1) presented in Figure 1. The historical growth of e-government, with its emphasis on central policy directives (Cabinet Office, 1999; 2000), produces a unique context for investment decision-making. Thus, the first major conceptual division of IS evaluation within the framework is investment decision-making. Reports of scepticism (Jones & Hughes, 2001) and managerial myopia (Irani et al., 2001), identifying the body responsible for the decision and the basis for the decision (cost, benefit and risk), have resulted in the need to examine key questions.

The introduction of BV legislation in April 2000 counters the private sector tendency to avoid ex-post evaluation (Kumar, 1990; Willcocks & Lester, 1994). However, the notion of value in the public sector is uncertain (Heeks, 1999; Bannister, 2001), and as the earlier literature review shows, a plethora of different approaches to evaluation are available. This leads to the second major conceptual component within CF1 and the key questions about the notion of value and choice of method.

The case for looking at a stakeholder perspective during the evaluation process has been well justified through the literature (e.g. Willcocks & Lester, 1994; Pouloudi & Whitley, 1997; Khalifa et al., 2000). Any information system is part of a human social structure, and the implementation of IS can have cascading negative effects throughout an entire organization, if cultural and structural issues are not adequately addressed (Hamlyn, 1993; Beynon-Davies, 1995). The pervasive influence of organizational culture and structure means they should be seen as a major driver in all decision-making processes. As a result, the framework highlights culture and structure, and seeks to make explicit their impact on all elements of IS evaluation.

The research cycle and data collection

Following a comprehensive review of the normative literature, an initial conceptual framework (as presented in Figure 1) for public sector information systems evaluation is proposed, which is then explored within two local government authorities. The data collection procedure followed the major prescriptions by most textbooks in doing fieldwork research (e.g. Yin, 1989). Data were collected from a variety of sources that included interviews, observations, illustrative materials (e.g. newsletters and other publications that form part of the case study of the organization's history) and past project documentation.

A total of 12 interviewees at different layers in the organizational hierarchy across both case studies were used to extract primary data. The duration of each interview was approximately 30 minutes, where every interview was conducted on a one-to-one basis, so as to stimulate conversation and break down any barriers that may have existed between the interviewer and interviewee. The authors acted as a neutral medium through which questions and answers were transmitted, thus endeavouring to eliminate bias. Bias in interviews occurs when the interviewer tries to adjust the wording of the question to fit the respondent, or records only selected portions of the respondent's answers. However, interviewer bias most often results from the use of probes; these are follow-up questions that are typically used by interviewers to get respondents to elaborate on ambiguous or incomplete answers (Jick, 1979; Shaughnessy & Zechmeister, 1994). In trying to clarify the respondent's answers, the interviewer was careful not to introduce any ideas that may form part of the respondent's subsequent answer. Furthermore, the interviewer was also mindful of the feedback that respondents gained from their verbal and non-verbal responses. As a result, the interviewer avoided giving overt signals such as smiling and nodding approvingly when a respondent answered or failed to answer a question.

Figure 1. CF1: Initial framework for public sector IS evaluation. [The figure places 'IS Evaluation' at the centre, linked to 'Investment Decisions' (who and how? factors: costs, benefits, risks), 'Evaluation Methods' (value? which one? concept of value; choice of method) and 'Culture & Structure'.]

CASE STUDY ONE

This case study concerns a UK unitary local authority that provides a range of public services, including Education, Social Services and Highways. The population is 147 000; the staffing establishment is 6000; the annual revenue budget is £150 million; and the annual IT revenue budget is £2.5 million. Six senior information systems stakeholders charged with delivering against central government targets were interviewed as part of this case study. These were the Head of IT, an IT Account Manager, the IT Operations Manager, the Assistant Director of Finance, a senior Social Services Manager and an Assistant Chief Executive.

The research protocol focused on three main topics, namely, culture, investment decisions and evaluation methods, which are unified around a core category of IS evaluation. The three main categories are briefly summarized below and offer context surrounding CF1.

Organizational culture and structure

The organization sees itself as progressive and does not wish to lag behind other authorities with regard to service delivery and e-government infrastructure. However, at odds with this vision is a recent internal reorganization that has resulted in a reduction in headcount, and particularly management's capacity to drive initiatives forward, such as IS implementation and evaluation. In common with public sector tradition, the organization is averse to risk. Senior decision makers do not always see potential benefits or wish to explore the risks of information system implementations, and therefore there is a lack of both vision and risk-management/mitigation. Generally, departments do not act corporately but instead compete for political recognition, status and resources. Departmental objectives are idiosyncratic, and there is often a limited corporate approach to issues, including IS evaluation. This is characterized by corporate project inertia, rather than open disagreement. Departments appear to be overtly rational, while covertly negotiating private interests.


From an IS perspective, this has led to a lack of integration of systems and a reluctance of departments to share information and access to systems. Despite this, IS are seen as lying at the heart of much of the local authority's service provision. Information system implementation and use is recognized as supporting front-line service delivery. Interviewees contended that the local authority could not carry out its functions efficiently or effectively without the use of IS and its associated infrastructure. The importance of successful IS deployment is widely recognized and sought, and seen as a critical success factor in delivering services to citizens, together with the broader e-government agenda dictated by the central government.

Investment decisions

Information systems investment decisions are normally undertaken in isolation by information system departmental management. They are largely intuitive and political in nature, driven by particularly motivated individuals in the organization. Information systems investment decisions are usually taken by the Head of IT, who maintains that he or she understands the priorities of the organization and how IS should be contributing towards the local authority's e-government agenda, and is therefore best placed to make IS investment decisions. It had proved difficult for the Head of IT to engage other senior managers in IS investment decisions. It appeared that the main reasons were a lack of interest and management time. The latter reason has been compounded with the recent reduction in headcount and, therefore, capacity at managerial level. Currently, IS investment decision-making in the case study organization is largely an intuitive act.

The major focus for IS investment decision-making is cost reduction, which is supported by the central government's BV agenda. There is a perception that if a financial figure is put upon an IS project by accountants or IS vendors, then it must be correct, and it is therefore rarely challenged. Interviewees noted that IS rarely stand still, and that the evaluation process is therefore not static but dynamic: there are always subsequent IS costs (direct and indirect) that need consideration and that are often excluded from initial costing and decision-making models. Potential dis-benefits, such as adverse effects on working practices or unfavourable outcomes for the organization, were not identified or considered. This was seen as a cultural inheritance from past traditional investment decision management: risk analysis, and other IS evaluation approaches that include dis-benefits, are not engaged with by management. With hindsight, the interviewees recognized that proactively educating users on this issue would be useful; improving understanding of the potential impact of IS could lead to improved decision-making processes.

Evaluation methods

All interviewees acknowledged that IS evaluation is important to enable problems and issues to be identified and resolved. However, IS evaluation has not in the past been given a high level of importance, because of a lack of understanding and confusion surrounding the large number of techniques available to decision makers. There is no significant pre- or post-implementation evaluation of IS, except for a broad estimate of costs or, very occasionally, post-implementation reviews undertaken by the internal audit section at the request of senior management (usually when a system has significant problems). Regardless, the documentation associated with these activities suggests that evaluation as a process was rather verbose and did not feed the issues learnt back into the organization. By not doing so, the organization is unable to learn from past mistakes and mature through experience. Several reasons were cited for not undertaking formal evaluation, including 'it is not necessary', 'it is too difficult', 'it is too resource intensive' and 'it is too costly'. Interviewees regarded evaluation methods as being of little value and accountancy driven, because they are based upon economic factors (grounded in competitive motivation), which have little meaning in the public sector. Similarly, tribal boundaries between departments prevent co-operation and the free flow of communication, largely because of the hierarchical, bureaucratic public sector structure, which is at odds with private sector practices of lean management. However, as investment appraisal methods are not used, it is impossible for the organization to evaluate its IS and to gauge to what extent its investments deliver anticipated benefits.

It is acknowledged that informal, subjective IS evaluation takes place in the user community. However, this interpretive evaluation is not being gauged or measured. User opinion is largely ignored because it has not been considered valuable to date, and it is not articulated and subsequently incorporated into any IS evaluation process. Information systems evaluation grounded in user opinion, which tells the organizational story, is not exposed; this raises significant challenges for future investments that support the authority's e-government strategy.

Based on the findings from case study one, the structured case method requires a review of the conceptual framework before embarking on the second case study. The basic structure of Figure 1 appears to have been effective, with the study yielding critical data on both the decision-making process and embedded evaluation methods. In particular, the culture and structure of the organization sheds light on the reasons behind its approach to IS evaluation. Appendix 1 indicates how the statements from the participants were grouped to refine concepts. Figure 2 summarizes the key concepts that were identified and added to the initial conceptual framework (CF1) to form CF2.

CASE STUDY TWO

The second case concerns another UK unitary local authority that provides a similar range of public services to case study one. This was seen as an important factor when selecting the second case, as consistency will allow readers to draw cross-parallels; however, it is worth noting that the authors are not seeking to compare cases. Case two has a population of 129 000, a staffing establishment of 7000, an overall annual revenue budget of £157 million and an annual IT revenue budget of £2.2 million. As in the first case, six stakeholders were interviewed; they were considered to have sufficient depth of knowledge when exploring the evaluation of e-government infrastructure decisions. Interviews took place with the Head of Information, Communications and Technology (ICT), the ICT Operations Manager, the Deputy County Treasurer, a senior Social Services Manager, a senior Housing Manager and a senior Finance Manager.

Organizational culture and structure

The organization has a traditional hierarchical structure but is divided into three areas, termed 'shires', for operational purposes. This structure has led to some strategic and operational difficulties, mainly owing to confusion over roles and responsibilities, which have not been clearly defined by senior management. The structure has impacted upon the authority's ability to act on a corporate basis and has led to departmental autonomy and a lack of cohesion. On occasions, the structure has led to tension between the three shires and departments. This has an impact upon the authority's ability to manage, monitor, review and evaluate issues, including IS, on a consistent basis.

Information systems have traditionally been used as a tool in an attempt to structure and standardize organizational practices across the organization, within individual departments in the three shires. However, this has been largely unsuccessful, because working practices are not consistent enough across the structure to facilitate the successful use of information systems. There are no official or formal forums to discuss IS investment decisions or their evaluation. It would appear that much decision-making is initiated by chance meetings that occur between senior IS managers and other senior functional managers.

Figure 2. CF2: Emergent concepts.

Culture & Structure
- Constraints: available resources; departmental structure
- Attitudes: to technology and IS; to risk; to corporate planning; to collaboration

Investment Decisions
- Decision maker's roles: primary decision; senior management; consultation
- Factors and sources: main drivers; direct costs; indirect costs; non-cash benefits; dis-benefits
- Other issues: future developments

Evaluation Methods
- Concept of value: financially based; user opinion
- Post hoc evaluation: importance; why done; why not done; who does it; outcomes
- Choice of methods: methods used; methods not used


Investment decisions

The Directors Management Team (DMT) of the organization is not IS literate and is not interested in specific IS implementations, but is mandated by a broader agenda (including e-government). This has resulted in IS issues being dealt with further down the management structure, at second- or third-tier levels. However, the DMT and senior management in general are concerned about the high level of IS expenditure in proportion to the perceived benefits being delivered through e-government.

Management decision-making to invest in information systems is considered obvious and common sense. There are no management processes or procedures to follow with regard to IS investment decisions, and rules tend to be made up depending on the individual circumstances of a given situation. Typical reasons for an information systems investment decision include obsolete IS, potential cost savings, withdrawal of vendor support, new working patterns, support for e-government or new legislation. Senior management often react to IS investment decisions with arbitrary gut feeling, or as an act of faith. Directors sometimes set aside finance for IS investments when there is a perception that IS can resolve a problem or improve a problem situation for stakeholders, or is fundamental to e-government service delivery. A key management issue that emerged is that the quality of service delivery is considered far more important than cost. This, perhaps, is not surprising, as public sector performance metrics are heavily driven by internal and external politics.

A great deal of work is required to produce a business case for IS investments (including e-government). Furthermore, it is believed that there are significant difficulties in quantifying costs and identifying potential efficiency and effectiveness improvements. A formal business case is therefore rarely produced. The management decision to invest in information systems is seen as being largely obvious, with this ad hoc approach being considered valid by decision makers. Furthermore, the organization does not see any merit in spending weeks compiling a business case when benefits are very difficult to measure and, in practice, not assessed and often superseded by political measures. However, key stakeholders (including elected members) will be 'sounded out', unofficially, for their views and support, in order to obtain tacit approval in principle.

Evaluation methods

The organization recognizes that IS evaluation is not being addressed and that time is not set aside to undertake this function. Resources are a major issue, and it was suggested that IS evaluation could be undertaken if adequate specialist resources were allocated for the task. However, the organization is not sure about what is required, as little knowledge is retained within it. The organization acknowledges that IS evaluation should be undertaken, but believes that traditional evaluation methods are inappropriate, remaining financial in nature and motivated by productivity gains. This is of major concern to senior management, because IS evaluation is central to BV.


User opinion is unofficially canvassed to informally legitimize the choice of information systems and their fit with e-government. Information technology staff do make an effort to speak to people to find out about successes and problems; nonetheless, this approach is not formalized or documented. Interviewees anticipated that any future IS evaluation approach adopted to legitimize investment decisions should not be based upon an economic model. Rather, the approach should be based upon evaluating the system from the point of view of stakeholders, and comparing the impact of the IS against the objectives of the organization. One interviewee gave an interesting insight into the contextual nature of evaluation, stating that 'you never stop assessing IS, it is something you do. You assess all aspects of your job'.

Interviewees recognized that informal IS evaluation occurs within the organization via user opinion, and that this aspect, although important, is not valued during the investment evaluation process. It was recognized that user opinions could be negative, but this was perceived not as a problem but as a challenge, because any criticism would be viewed as constructive. However, the problem would need identifying, in an attempt to improve the situation from the user point of view, as individuals perceive situations from their individual circumstances. Therefore, any form of interpretive IS evaluation has hidden agendas, with personal viewpoints emerging. However, this could be used to initiate dialogue between stakeholder groups and perhaps improve working relationships and understanding. Such an interpretive evaluation process could be regarded as a means of encouraging co-operation, involvement and commitment of stakeholders, and is considered realistic in the public sector.

Political and social issues heavily influence organizational life, including IS aspects; that is to say, the context in which the information system is deployed plays an important role in any evaluation approach. It was acknowledged that decisions are often political, and that evaluation is always subjective. The organization recognizes that it is impossible to impose a rational way of undertaking IS evaluation via mechanistic methods. The reasons for this include the overt political culture, the irrational management decision-making process and the irrelevance of economic evaluation metrics in the public sector domain, all of which represent strong influencing factors.

REVISED CONCEPTUAL FRAMEWORK (CF3)

The findings from the second case study have enabled a framework for the evaluation of e-government projects to be proposed. Concepts that emerged from the case studies can be seen in Appendix 2. Although most of the interviewees' statements from case study two align with, or are similar in nature to, those identified in the first case, some additional factors did nonetheless arise. In discussing decision-making, there is significantly more emphasis on the decision process. The discussion also uncovers deeper consideration of the selection criteria for choosing between alternative evaluation methods. Finally, two attitudinal attributes were identified at senior management level, referred to as 'IS literacy' and 'perceived IS costs'.


The other issues element under investment decisions is contrived and growing; the statements attributed under this heading might be more rationally seen as cultural attitudes. The placement of post hoc evaluation as subsidiary to evaluation methods does not give appropriate weight to the activity, which should be on a par with investment decisions. Similarly, factors such as costs, benefits and risks, placed under investment decisions, are also applicable when undertaking post hoc evaluations. Taking these points into consideration, a fourth tier is added to the framework, as shown in Figure 3.

DISCUSSION

Although the two studies are distinctive, with different issues emerging within each case, there are also strong parallels between the two organizational views. If anything, there are more conflicts between statements within each organization than there were between the two views. A key finding was that many of the interviewees from both organizations considered user opinion to be an extremely important aspect of organizational life. It is believed to have the potential to produce deep insights and to ground the IS evaluation process. However, these personal beliefs do not translate into any formal process or common corporate recognition of value. Improving IS evaluation may lie with the articulation of the experiences of multiple stakeholders, which are grounded in tacit knowledge, rather than with the use of mechanistic methods. Although there is a considerable amount of literature on the use of formal IS evaluation methods, the case study organizations did not use such methods. Contextual IS evaluation could provide an approach accommodating the multiple perspectives of information system users. This would elicit knowledge and understanding of the impact of IS implementation, drawing on 'situated' practice (Suchman, 1993) to tell the organizational story. Such an IS evaluation approach would be local, embodied, emergent, contingent and constantly changing.

Figure 3. CF3: Proposed framework for public sector IS evaluation. Four elements bear on IS evaluation:

Investment Decisions (Who and how?)
- Decision maker's roles: primary decision; senior management; consultation; decision process
- Main drivers

Post hoc Evaluation (When and how?)
- Importance; who does it; outcomes
- Justifications: why done? why not done?

Evaluation Methods (Value? Which one?)
- Concept of value: financially based; user opinion
- Factors and sources: direct costs; cash benefits; non-cash benefits; indirect costs; dis-benefits; risks
- Choice of methods: methods used; methods not used; criteria

Culture & Structure
- Constraints: available resources; departmental structure
- Attitudes: to technology and IS (IS/IT literacy; perceived cost); to risk; to corporate planning; to collaboration

It appears that a key issue facing UK public sector organizations is demonstrating to senior managers that IS implementation delivers value. Furthermore, BV legislation challenges public sector organizations to demonstrate BV, which includes an assessment of information system deployments. Both case studies acknowledge that there is a lack of IS evaluation, which makes it impossible to judge whether an information system is successful in delivering against the objectives used to justify its adoption (in these cases, also against a broader e-government agenda). It is common ground that financially based methods are inappropriate, but the organizations lack the resources and knowledge to select and apply methods from the alternatives available. Indeed, a lack of experience leaves it unclear whether appropriate methods are applicable for the public sector. Regardless, a starting point exists through the centrally endorsed BV approach. The prevailing situation is one of increasing IS deployment, but where chief executives continue to be generally dissatisfied with the outcomes of IS expenditure. The authors suggest that organizations will need to address interpretivist IS evaluation models, rather than continuing to defend the intuitive gut feel and act-of-faith methods prevalent in current practice.

Through the empirical work presented, the authors sought to extrapolate the experiences of senior stakeholders to gain an understanding of IS evaluation in public sector organizations. No claim for generalization is made for interpretive research of this type; as a result, the lessons learnt are a result of the description provided and do not seek to be prescriptive. However, the richness of the case studies does lead to potential lessons for IS practitioners and has resulted in an emerging model.

Given the principles behind BV legislation, the current practices and values revealed within the framework make it clear that the notion of 'value' is not adequately understood. This leads us to the first five lessons to be extrapolated from this study and grounded in the literature:

Lesson 1 A balance must be found between benefit realization, cost reduction and risk management if e-government initiatives are to be successful. The traditional norm of cost being subservient to service quality is not appropriate under central government public sector reforms and will limit the 'take-up' and success of e-government.

Lesson 2 Organizations must question the underlying assumptions of existing information system evaluation methods, and seek alternative perspectives, regardless of whether these methods fail to achieve desired objectives and outcomes.

Lesson 3 Information system evaluation must have an explicit concern for the social context, and interpretive methods must be adopted that explicitly situate stakeholders at the centre of the evaluation process, as the impact of e-government is considered far reaching.

Lesson 4 The views, beliefs and assumptions of stakeholders must be exposed and considered within the information system evaluation process, and not be tangential to it.

Lesson 5 Organizations must have an interpretive IS evaluation approach to complement the information system development, project management and implementation functions.

It is not simply that organizations do not know how to evaluate the information systems associated with e-government, but that senior management see little purpose in doing so, or simply lack interest, despite their reservations about high costs and low benefits from IS investment. This leads us to two further lessons:

Lesson 6 Organizations should commit adequate resources to evaluate IS fully via interpretive approaches. Information systems lie at the core of e-government and have numerous social, political and technical dimensions.

Lesson 7 Situation and context are critical; prescriptive evaluation procedures are insufficient, and a full understanding of the domain, at all levels, is essential to successfully undertake interpretive evaluation.

The final two lessons follow from the observations of structural and cultural factors inhibitingboth IS development and evaluation:

Lesson 8 The hierarchical and political nature of public sector organizations creates a barrier to change, and this must be overcome to ensure interpretive approaches are successful in practice.

Lesson 9 Organizations need to seek ways to overcome political barriers and develop a culture conducive to facilitating an interpretive approach.

CONCLUSIONS

A by-product of this investigation into public sector IS evaluation is a proposed framework that has evolved from two in-depth case studies. Given the belief that user opinion is critical to e-government success, the choice of boundaries and stakeholder representatives is clearly critical. It would be interesting to see how far the framework would need to evolve if it were applied with a wider stakeholder community. Nonetheless, the framework represents a starting point by offering a potential model for other studies in the public sector. This paper draws on two directly elected, unitary authorities with diverse functional roles. The public sector also includes organizations with non-elected governance, multitier authorities and ones with single-function responsibilities.

Where information systems evaluation is undertaken, formal methods tend to prevail. However, there continues to be disagreement over their usefulness and over which model to adopt, if any. In practice, these formal methods are often not used rigorously, or are used as a means of political justification. It is therefore difficult to demonstrate that formal evaluation methods work well when evaluating e-government, or that IS investments deliver their anticipated effects and benefits. Indeed, much of the empirical work explored in this paper concludes that information systems evaluation should be viewed as a process of experiential and subjective judgement, grounded in opinion and world views, and therefore challenges the predictive value of traditional investment methods. Such interpretive evaluation could be regarded as a means of encouraging co-operation, involvement and commitment of stakeholders, and is considered apt for the public sector. Clearly, there is dissatisfaction with traditional IS evaluation processes in the public sector. This research has highlighted that many information system e-government decisions are political and that evaluation is always subjective. Local government management and stakeholders involved in decision-making recognize that it is impossible to impose a rational way of undertaking IS evaluation via mechanistic methods, owing to the political culture, irrational decision-making processes and the irrelevance of economic metrics in the public sector domain. Traditional methods of appraisal have failed to satisfy the concerns of senior managers when evaluating their e-government infrastructures. Many of these concerns relate to a lack of knowledge and understanding with regard to the value, benefit and level of success that is ultimately delivered through IS investments. These concerns cannot be addressed, and the future outlook cannot improve, unless the public sector gives sufficient attention to information system evaluation issues and seeks alternative approaches that place greater emphasis on a multistakeholder perspective in evaluation.

ACKNOWLEDGEMENTS

The authors would like to thank the case study authorities for participating in this project. Without their co-operation and the support of management and employees, this research could not have been undertaken. Finally, the authors would like to acknowledge the financial support provided by the Engineering and Physical Sciences Research Council (GR/R08025).

REFERENCES

Ballantine, J.A. & Stray, S.J. (1999) Information systems

and other capital investments: evaluation practices

compared. Logistics and Information Management, 12,

78–93.

Page 18: Evaluating e-government: learning from the experiences of two UK local authorities

Z Irani et al.

© 2005 Blackwell Publishing Ltd, Information Systems Journal 15, 61–82

78

Bannister, F. (2001) Dismantling the silos: extracting new

value from IT investments in public administration. Infor-

mation Systems Journal, 11, 65–84.

Bannister, F. & Lalor, S. (2001) Towards an Ethical Frame-

work for E-Government. Proceedings of the European

Conference on E-Government, 27–28 September 2001,

Trinity College, Dublin, pp. 15–28.

Benyon-Davies, P. (1995) Information systems ‘failure’:

the case of the London Ambulance Service’s computer

aided despatch project. European Journal of Information

Systems, 4, 171–184.

Cabinet Office (1999) Modernising Government. HMSO,

London, UK.

Cabinet Office (2000) E.Gov – Electronic Government Ser-

vices for the 21st Century. Performance and Innovation

Unit, HMSO, London, UK.

Carroll, J., Dawson, L.L. & Swatman, P.A. (1998) Using

Case Studies to Build Theory: Structure and Rigour.

Proceedings of 9th Australasian Conference on Informa-

tion Systems, 30 September-2 October, University of

NSW, Sydney, Australia.

Carroll, J. & Swatman, P. (2000) Structured-case: a meth-

odological framework for building theory in information

systems research. European Journal of Information Sys-

tems, 9, 235–242.

Chenery, J. (2001) Seamlessly serving citizens. The busi-

ness of public sector procurement. Summit, March, 19.

Choudrie, J. & Lee, H. (2004) Broadband development in

South Korea: institutional and cultural factors. European

Journal of Information Systems, 13, 103–114.

Davies, R. & Chalk, T. (1996) Form and function: publishing

the Canadian Government Weekly Checklist on the

internet. Proceedings of the Asia Annual Meeting, 33,

24–29.

DTI (1998) Our Competitive Future – Building the Knowl-

edge Driven Economy. Department of Trade and Indus-

try, HMSO, London, UK.

DTLR (2002) E-Gov@Local: Towards a National Strategy

for Local E-Government. Department for Transport Local

Government and the Regions, HMSO, UK.

Eschenfelder, K.R.J.C., Beachboard, C.R., McClure, A. &

Wyman, S.K. (1997) Assessing US Federal Government

websites. Government Information Quarterly, 14, 173–

189.

Farbey, B., Land, F. & Targett, D. (1993) IT Investment: A

Study of Methods and Practices. Management Today

and Butterworth-Heinemann Ltd, UK.

Gronlund, A. (2000) European electronic service infra-

structure building – drifting into the future? Challenges of

Information Technology Management in the 21st Cen-

tury, pp. 330–333. Hershey, Idea Group, Boston, MA,

USA.

Hall, R. (1998) New electronic communication from local

government – marginal or revolutionary? Local Govern-

ment Studies, 24, 19–33.

Hamlyn, K.L. (1993) Report of the Inquiry into the London

Ambulance Service. Prince User Group Ltd. & Binder

Hamlyn, London, UK.

Heeks, R. (1999) Reinventing Government in the Informa-

tion Age. Routledge, London.

Hochstrasser, B. (1992) Justifying IT Investments. Confer-

ence Proceedings: AIS; The new technologies in today’s

business environment, pp. 17–28.

Horrocks, I. & Hambley, N. (1998) The ‘webbing’ of British

local government. Public Money and Management, 18,

39–44.

Huang, C.M.J. & Chao, M.H. (2001) Managing WWW in

public administration: uses and misuses. Government

Information Quarterly, 18, 357–373.

Irani, Z. (2002) Information systems evaluation: navigating

through the problem domain. Information and Manage-

ment, 40, 11–24.

Irani, Z. & Love, P.E.D. (2001) The propagation of technol-

ogy management taxonomies for evaluating investments

in information systems. Journal of Management Infor-

mation System, 17, 161–177.

Irani, Z. & Love, P.E.D. (2002) Developing a frame of ref-

erence for ex-ante IT/IS investment evaluation. Euro-

pean Journal of Information Systems, 11, 74–82.

Irani, Z., Sharif, A.M. & Love, P.E.D. (2001) Transform-

ing failure into success through organisational learn-

ing: an analysis of a manufacturing information

system. European Journal of Information Systems, 10,

55–66.

Irani, Z., Themistocleous, M. & Ghinea, G. (2004) Devel-

oping Infrastructure Guidelines to Support Pull/push E-

Government Information Technology (DIG-IT). CD-ROM

Proceedings of the European and Mediterranean Con-

ference on Information Systems (EMCIS) 2004, 25–27

July, Tunis, Tunisia.

Jick, T.D. (1979) Mixing qualitative and quantitative meth-

ods: triangulation in accumulation. Administrative Sci-

ence Quarterly, 24, 602–611.

Jones, S. & Hughes, J. (2001) Understanding IS evaluation

as a complex social process: a case study of a UK local

authority. European Journal of Information Systems, 10,

189–203.

Khalifa, G., Irani, Z., Baldwin, L.P. & Jones, S. (2000) Eval-

uating Information Systems with You in Mind. Proceed-

ings of the Seventh European Conference on

Page 19: Evaluating e-government: learning from the experiences of two UK local authorities

Evaluating e-government

© 2005 Blackwell Publishing Ltd, Information Systems Journal 15, 61–82

79

Information Technology Evaluation (ECITE2000), 28–29

September, Trinity College, Dublin, pp. 117–132.

Kumar, K. (1990) Post implementation evaluation of

computer-based information systems. Communications

of the ACM, 33, 203–212.

Lefley, F. & Sarkis, J. (1997) Short-termism and the

appraisal of AMT Capital projects in the US and UK. Inter-

national Journal of Production Research, 35, 341–355.

Margetts, H. & Willcocks, L. (1993) Information technology

in public services: disaster faster? Public Money and

Management, April–June, 49–56.

McMullen, S. (2000) US government information: selected

current issues in public access vs. private competition.

Journal of Government Information, 27, 581–593.

Mon, L. (2000) Digital reference service. Government

Information Quarterly, 17, 309–318.

Neuman, W.L. (1991) Social Research Methods: Qualita-

tive and Quantitative Approaches. Allyn and Bacon, Bos-

ton, MA, USA.

Oates, B.J. (2002) Foot and Mouth Disease: Constructing and Serving Multiple Audiences. Proceedings of UKAIS 2002 Information Systems, Leeds Metropolitan University, UK. UK Academy for Information Systems, UK.

Pouloudi, A. & Whitley, E.A. (1997) Stakeholder identification in inter-organizational systems: gaining insights for drug use management systems. European Journal of Information Systems, 6, 1–12.

Remenyi, D., Money, A., Sherwood-Smith, M. & Irani, Z. (2000) Effective Measurement and Management of IT Costs and Benefits. Butterworth-Heinemann, Oxford, UK.

Serafeimidis, V. & Smithson, S. (2000) Information systems evaluation in practice: a case study of organisational change. Journal of Information Technology, 15, 93–105.

Shaughnessy, J.J. & Zechmeister, E.B. (1994) Research Methods in Psychology, 3rd edn. McGraw-Hill, USA.

Smithson, S. & Hirschheim, R.A. (1998) Analysing information systems evaluation: another look at an old problem. European Journal of Information Systems, 7, 158–174.

SOCITM (2002) Local E-Government Now, 2002 – There for the Taking. Society of Information Technology Management, Northampton, UK.

Steyaert, J. (2000) Local governments online and the role of the resident – government shop versus electronic community. Social Science Computer Review, 18, 3–16.

Suchman, L. (1993) Response to Vera and Simon's situated action: a symbolic interpretation. Cognitive Science, 17, 71–75.

Walsham, G. (1999) Interpretive evaluation design for information systems. In: Beyond the IT Productivity Paradox, Willcocks, L. & Lester, S. (eds), pp. 106–149. Wiley, Chichester, UK.

Willcocks, L. & Lester, S. (1994) Information technology: transformer or sink hole. In: Beyond the IT Productivity Paradox, Willcocks, L. & Lester, S. (eds), pp. 9–29. Wiley, Chichester, UK.

Wilson, M. & Howcroft, D. (2000) Politics of IS Evaluation: A Social Shaping Perspective. Proceedings of the International Conference on Information Systems, December, Brisbane, Australia.

Yin, R.K. (1989) Case Study Research – Design and Methods. Applied Social Research Method Series, Vol. 34. Sage Publications, Newbury Park, CA, USA.

Biographies

Professor Zahir Irani is the Head of Information Systems and Computing at Brunel University (UK). Having worked for several years as a project manager, Professor Irani retains close links with industry and is a non-executive director of a leading engineering company. He consults for the Office of the Deputy Prime Minister in the UK as well as for international organizations such as Royal Dutch Shell Petroleum, DERA, BMW and Adidas. Professor Irani reviews research proposals submitted to UK funding councils, the European Commission and the National Science Foundation in the USA. He is the Editor-in-Chief of the established Journal of Enterprise Information Management and European Editor of the Business Process Management Journal. He has co-authored a teaching textbook on information systems evaluation, written over 150 internationally refereed papers and received ANBAR citations of research excellence. Professor Irani serves on the editorial boards of several journals, and as co- and mini-track chair at international conferences such as AMCIS, HICSS and EMCIS. He has published in the Journal of Management Information Systems, European Journal of Information Systems, Information & Management, Information Systems Journal and IEEE Transactions. Professor Irani has received numerous grants and awards from the EPSRC, ESRC, Royal Academy of Engineering, Australian Research Council, DERA and the European Commission.

Professor Peter Love is a Professor in the School of Management Information Systems at Edith Cowan University (Australia) and Director of Research for the Working for e-Business (We-B) Centre. Dr Love acts as the Asia-Pacific Editor for the Logistics Information Management Journal and the Business Process Management Journal, is the Educational Advisor to the Hong Kong Institute of Real Estate, and has acted as a Visiting Professor at Brunel University in the UK and Hong Kong Polytechnic University. He retains close links with industry and serves on several government working committees. He has an MSc in Construction Project Management from the University of Bath in the UK and a PhD in Operations Management from Monash University in Australia. He has a wide range of industry experience, gained in the UK and Australia working as a consultant project manager and a commercial manager for a multinational construction and engineering organization. Dr Love has been a recipient of research grants from the Australian Research Council (ARC), the Engineering and Physical Sciences Research Council (EPSRC) and the Research Grants Council (RGC) in Hong Kong, and he regularly reviews research proposals submitted to the ARC, EPSRC, RGC and South African National Research Foundation. Dr Love has a multidisciplinary background and varied research interests, which include supply chain management, knowledge management, project management, e-Business, e-Procurement, strategic sourcing of IT, IT personnel management and strategic information systems evaluation. He serves as an editorial advisory board member for several leading international journals, and has acted as co- and mini-track chair at numerous international conferences. He has co-authored/edited four books and has authored/co-authored over 200 internationally refereed research papers, which have appeared in leading international journals such as the European Journal of Information Systems, Information and Management, Journal of Management Information Systems, International Journal of Production Economics, International Journal of Operations and Production Management and International Journal of Project Management.

Dr Tony Elliman is a Senior Lecturer in the Department of Information Systems and Computing at Brunel University. He has been active in research concerned primarily with the development of information systems for knowledge workers. He is a member of ISEing and has experience of using simulation techniques in a variety of contexts. Having trained as an Electrical Engineer with International Computers Limited, Dr Elliman has a broad background, having taught many aspects of computing including information systems development, software engineering, distributed systems and hybrid simulation. As a Chartered Engineer, he has provided software consultancy services to government, academic and private sector organizations, including DERA and the EU. He has co-organized mini-tracks at conferences such as AMCIS and EMCIS and serves as a guest editor for the Journal of Enterprise Information Management, among others.

Dr Steve Jones is the Head of Information Technology at Conwy County Borough Council, a Welsh unitary authority in the UK. In a career of over 25 years as an IS practitioner, he has worked in several roles, including programming, systems analysis, project management, systems development management, consultancy and IT Departmental Head. His work has been predominantly in the public sector, where he has worked for several large organizations. He holds a first class honours degree in Business Information Systems Management from Salford University, a Master's with distinction in Information Systems from Liverpool University, and a PhD in Information Systems from the Information Systems Research Institute at Salford University, which was awarded a five-star (5*) rating, the highest achievable, in the 2001 Research Assessment Exercise. He is a Chartered Engineer and Chartered Information Technology Professional, and a member of the British Computer Society, the Chartered Institute of Management and the Society of Information Technology Managers. He was instrumental in establishing an IT benchmarking initiative for IT Departments in Welsh unitary authorities and undertook preparatory work to define Key Performance Indicators (KPIs) for the IT service in local government. Dr Jones has given guest lectures at several UK universities in areas such as IS management, evaluation, systems analysis, project management, e-government, ethics and law. He is a member of several editorial boards and academic and practitioner working groups. Dr Jones has edited international journals and written book chapters, international journal papers and conference papers. He serves on international conference committees and has chaired international conference sessions. His research interest is in the area of IT investment appraisal and evaluation.

Dr Marinos Themistocleous is a Lecturer in the Department of Information Systems and Computing at Brunel University. He holds a PhD in 'Adopting and Evaluating Enterprise Application Integration' from Brunel University. He also holds an MSc in Information Systems Management and a Bachelor's degree in Computer Sciences, both from Athens University of Economics and Business. Dr Themistocleous acted as the prime investigator in a research project that focused on the integration of Customer Relationship Management (CRM) applications in the London Borough of Havering (co-funded by ORACLE UK). He has close relationships with industry and has worked as a consultant for the Greek Ministry of Finance (on e-government adoption), the Greek Standardization body (as an EDI expert), the Greek Federation of SMEs (as an e-Business expert), ORACLE Greece (responsible for the localization of ORACLE's ERP system) and ORACLE UK (on CRM integration in e-government). Dr Themistocleous has co-authored three teaching textbooks on Electronic Commerce and Distance Learning, published several internationally refereed journal papers and received citations of excellence. A number of his articles appear in A-class journals, including the European Journal of Operational Research and Information & Management. He has also published more than 35 research papers at international conferences. During the last few years he has co-organized mini-tracks at conferences such as HICSS, AMCIS and EMCIS, and serves as a guest editor for the well-established Journal of Enterprise Information Management (special issues on e-government and systems integration). He also acts as an international reviewer for research proposals submitted to the European Union.

APPENDIX 1: ANALYSIS FOR THE FIRST REVISION TO THE CONCEPTUAL FRAMEWORK

To derive the second framework (CF2) for the public sector, the observations on case one were sorted into different conceptual groups. The following table shows the concepts identified, with an indication of the statements exemplifying each concept. Note that the concepts within the framework are neutral, but the data from the case might support, qualify or deny the concept.

Culture and structure

Constraints:
- Available resources (poor, reduced headcount)
- Departmental structure (idiosyncratic objectives)

Attitudes:
- To technology and IS (progressive, IS mission critical but lacks vision)
- To risk (risk averse but poor risk management)
- To corporate planning (overtly rational, but covertly parochial negotiation, competition for status and resources)
- To collaboration (reluctance to share information, lack of integration, corporate project inertia)

Investment decisions

Decision maker's roles:
- Primary decision (IS department, Head of IT)
- Senior management (difficult to engage, lack of interest and time)
- Consultation (none – Head of IT knows the needs)

Factors and sources:
- Main drivers (intuitive and political, cost and BV agenda, individual motivation)
- Direct costs (from IS vendors or accountants, rarely challenged)
- Non-cash benefits (intuitive or assumed)
- Indirect costs (often excluded)
- Dis-benefits (not identified or considered)

Other issues:
- Future developments (proactively educating users would be useful)

Evaluation methods

Concept of value:
- Financially based (of little value, competitive basis irrelevant)
- User opinion (largely ignored, historically not seen as valuable)

Post hoc evaluation:
- Importance (important, enables problems to be resolved)
- Why done (response to significant problems)
- Why not done (lack of understanding, too many techniques, not necessary, too difficult, too costly, resource intensive)
- Who does it (internal audit)
- Outcomes (no measure of anticipated benefits, no organizational learning)

Choice of methods:
- Methods used (broad estimates of costs)
- Methods not used (ones that assess dis-benefits or user opinion)


APPENDIX 2: ANALYSIS FOR THE SECOND REVISION TO THE CONCEPTUAL FRAMEWORK

The following table shows the analysis of statements from case two, using and extending the relevant concepts from the second framework. The designation indicates a new concept not identified in case one.

Culture and structure

Constraints:
- Departmental structure (hierarchical structure, three geographical areas, confused responsibilities, inconsistent practices)
- IS/IT literacy (low in senior management)

Attitudes:
- To technology and IS (see IS/IT literacy above)
- To corporate planning (departmental autonomy, lack of cohesion)
- To collaboration (failed attempts to use IS to standardize practices)

Investment decisions

Decision maker's roles:
- Primary decision (second or third level)
- Senior management (not interested in IS details)
- Consultation (key stakeholders will be 'sounded out' unofficially, chance meetings, no official forum to discuss investment, user opinion not valued during investment)
- Decision process (project champion to manage the politics, no procedures, rules made up, a business case is inaccurate and not worth the expense)

Factors and sources:
- Main drivers (decisions often political, arbitrary gut feel or act-of-faith, common sense – e.g. obsolete IS; new working patterns; potential savings; new legislation or policy)
- Non-cash benefits (perception that IS can resolve a problem)

Other issues:
- Perceived IS cost (excessive cost for benefits anticipated or delivered)

Evaluation methods

Concept of value:
- Financially based (inappropriate, irrelevance of economic evaluation metrics in the public sector domain)
- User opinion (unofficially canvassed, not formalized or documented)

Post hoc evaluation:
- Importance (it should be done but it is not, informal IS evaluation occurs all the time)
- Why done (because it is central to BV, to compare impact against the original objectives)
- Why not done (lack of resources, an appropriate methodology is needed, not sure what is required, lack of retained knowledge)

Choice of methods:
- Methods used (crude notion of how to evaluate IS)
- Criteria (quality of service far more important than costs, should not be based upon any economic model, should be the stakeholders' view)