
Canadian Journal of Development Studies, 1997

Process Evaluation: A Field Method for Tracking Those Elusive Development Results

Sheila A. Robinson, Philip Cox, Ivan G. Somlai, Alice Purdey & Bhisma Raj Prasai

This paper presents process evaluation methodology (PEM), a field-based participatory evaluation methodology, and its application in the Nepal Health Development Project (NHDP) funded by CIDA. The authors assert that Process Evaluation is an effective evaluation tool for use within a results-based management (RBM) framework. In particular, it is applicable to development programs that intend to enhance the capacity of individuals, organizations, and systems.

Sheila A. Robinson was the Canadian Coordinator of the Nepal Health Development Project (1989-1995) at the University of Calgary and Chair of the Canadian Society for International Health. She now serves on the International Advisory Committee for the Canadian Nurses Association, and the Board of the Canadian Consortium for International Social Development. Her experience and research interests are in community-based health development, human resource development of community health workers/nurses, women’s health, and participatory evaluation methods.

Philip Cox was the Canadian-based Research Assistant for the NHDP, and coordinated the implementation of the Process Evaluation. He has extensive experience in community development, development education, participatory planning and evaluation, and is currently developing field-based tools for RBM as a consultant for the firm PLAN:NET 2000 Ltd.

Ivan G. Somlai was the Management Advisor for the NHDP in Nepal (1990-1995). He has evaluated and monitored 17 projects in Nepal and is currently coordinating international partnership projects in a variety of sectors from the University College of the Cariboo. His interest lies in intercultural management and ethnobureaucracy.

Alice Purdey was the Community Health/Community Development Advisor for NHDP (1992-1995). She is a nurse-anthropologist with experience in community health development in Canada and Latin America. Her interests are in Participatory Action Research (PAR), and Participatory Rapid Appraisal (PRA) with particular application to women’s health and empowerment.

Bhisma Raj Prasai was the Nepali Coordinator for the NHDP (1988-1995) at the Institute of Medicine. He was a pioneer in Nepal in developing relevant community medical education at the Institute of Medicine. As Dean of the IOM he championed community health development with the MOH. He has worked in partnership with the University of Calgary since the early 1980s and has extensive experience with project management and health development. He is also a fine poet.


Dans cet article, les auteurs présentent la méthode d’évaluation de processus (MEP), une méthode d’évaluation participative sur le terrain, et son application dans le cadre d’un projet de développement de la santé mené au Népal sous le financement de l’ACDI. De l’avis des auteurs, l’évaluation de processus constitue un outil d’évaluation efficace dans le contexte de la gestion axée sur les résultats (GAR). Cette méthode d’évaluation peut s’appliquer en particulier aux projets de développement destinés à accroître les capacités de personnes, d’organisations et de systèmes.

Under previous management approaches of the Canadian International Development Agency (CIDA), implementing agencies were primarily responsible for delivering inputs and producing outputs.1 The results-based management (RBM) approach focuses on being accountable for delivering inputs and achieving longer-term developmental “results,” defined as “describable or measurable change that is derived from a cause and effect relationship” (CIDA, 1996). RBM is part of a worldwide trend toward greater public accountability and a sharper concern for developmental effectiveness (Picciotto and Rist, 1995; World Bank, 1992).

The RBM approach encourages project implementors to think beyond outputs – i.e., to identify what difference the project intends to make within its development context. As such, implementors identify a “chain” of results that extends from project inputs through outputs to longer-term impacts. They are challenged to identify performance indicators – signposts of progress associated with expected results. The RBM approach also encourages them to plan to deal with constraints, critical assumptions, and uncertainties. Designers of the RBM approach in CIDA recognize that if it is to successfully navigate the project toward anticipated outcomes and impacts, its implementation must be flexible enough to allow for changes and iteration in project strategy (CIDA, 1997; Rondinelli, 1993).

As part of the trend toward greater accountability and enhanced effectiveness, multilateral and bilateral agencies as well as the World Bank are stepping up efforts to document the effectiveness of their practices by exploring relevant evaluation strategies (World Bank, 1993; Picciotto, 1995). The central challenge within RBM is to design and implement flexible evaluation frameworks that can provide continuous monitoring and a regular performance review at all levels of project results, including post-project impacts.

1. Inputs represent the resources required, including money, time or effort, to produce a result. Outputs represent the immediate, visible, concrete and tangible consequences of program/project inputs (CIDA, 1996).


This paper presents the process evaluation methodology (PEM) developed and applied in a rural health development project, which focused on the enhancement of the capacity of local communities and the district health system for health development. It delineates lessons learned from this experience, and how PEM fits with RBM. It suggests a “recipe” for using PEM as a monitoring and evaluation tool in a bilateral project.

Capacity Building, RBM and the Origins of PEM

This section explores how capacity-building activities relate to RBM, as well as the origins of PEM within the project context. It places PEM in the current field of evaluation, and presents the groundwork for how PEM fits as an RBM tool.

Capacity Building and Results-Based Management

Within the new development agenda of sustainable poverty reduction, development assistance has increasingly focused on policy and capacity-building – investment in people and institutions (Picciotto, 1995; ACCC, 1996). The outcomes of capacity-building projects are hard to predict. As it is not a linear process, “capacity development involves uncertain means and unpredictable ends” and is risky, experimental, messy and long-term – especially in “soft” sectors such as social or health development (Qualman and Morgan, 1996).

Within the health promotion literature, capacity-building is conceived as “the nurturing of and building upon strengths, resources and problem-solving abilities already present in individuals and communities” (Robertson and Minkler, 1994). Sustainable capacity-building for institutions is seen as “the process of creating new patterns of activities and behaviours that persist over time because they are supported by indigenous norms, standards and values” (Eyford, 1991).

As such, capacity-building shifts the orientation of a project from a development “problem” (e.g. inadequate water and sanitation services, high rates of malnutrition) to the enhancement of the skills and capacities of the people affected by the problem. In so doing, an implementing organization gives away some of its control and certainty over the solution and the outcomes.

A results-based approach to evaluation, with its emphasis on iteration and flexibility, should enable implementors to identify incremental changes in the capacities of people, communities and/or institutions, and facilitate organizational learning among all stakeholders (ACCC, 1996). As implementors understand the cumulative effects of their capacity-building activities, prospects for a successful outcome, i.e. sustainable results, improve. Process evaluation is one vehicle for providing this intelligence to implementors.


Origins of PEM in the NHDP

The Nepal Health Development Project (NHDP), a participatory health development project funded by CIDA from 1988 to 1995, was implemented through an institutional partnership between the University of Calgary’s Division of International Development and Tribhuvan University’s Institute of Medicine in Nepal.

NHDP’s design logic pre-dates the current era of RBM. Periodic project evaluations relied heavily on management frameworks which focused more on outputs. They yielded useful information on how the project was proceeding in relation to planned outputs, for example, the number of health workers trained, or the number of villages engaged in a community development process. Although these assessments aptly addressed management and partnership issues, the partnership shared the concern that this approach to project evaluation did not adequately capture the “results” that lay beyond stated outputs.

To partially address this concern, field staff had begun to identify qualitative indicators that would help the project track the impacts of trained personnel on the programs and services of the health stations where they worked, or the changes in the quality of life amongst the villagers of participating communities; however, they realized that a broader evaluation framework was needed.

Therefore, in 1993, Nepali and Canadian managers and field staff designed a “process evaluation” methodology which enabled project participants, staff and funders to understand and follow the “capacity building” process of project activities with individuals, communities, and government agencies involved in the project. Understanding the process of capacity building was seen as essential to identifying and tracking results.

CIDA supported the application of this new methodology in a comprehensive evaluation of the NHDP. While the methodology was internally designed, the evaluation was implemented with a team of internal and external evaluators, the latter approved by CIDA (Justice et al., 1994).

The objectives of the 1994 Process Evaluation of the NHDP were: to understand and assess the capacity-building process by which the project achieved results, to determine results achieved and not achieved, to help the broad range of stakeholders refine the project’s operational effectiveness, and to enhance the local capacity to plan and evaluate in the future.

PEM in the Field of Evaluation

The PEM, as designed for the NHDP, is rooted in models of “participatory” and “empowering” evaluation. Participatory evaluation is concerned with ensuring that all the stakeholders, and in particular those most affected by the program or intervention, are participating in all aspects of an evaluation – design, implementation, feedback and follow-up. “Empowering” evaluations are so called when the evaluation experience leads to skill development and contributes to sustainable human development. Participatory evaluations typically are a learning process for all involved (Robinson, 1995; Muller-Glodde, 1991).

Participatory evaluation dates to the 1960s (Weiss, 1972). It draws on previous approaches to evaluation, all of which rely heavily, though not exclusively, on qualitative methods in design and data collection. For example, “responsive” evaluation (Stake, 1975) emphasized humanizing the process by placing the program stakeholders in the centre, and learning first hand about their concerns. Participation features in the “transaction” model of evaluation (House, 1978) that looks at “processes” of change – especially in education – and assumes that understanding comes from an inductive analysis of open-ended data gathered through direct contact with program participants (Patton, 1990).

The PEM design also relates to utilization-focused evaluation (Patton, 1986), in which the evaluation design and methods are not pre-determined but are derived from the needs of the stakeholders, as identified in consultation with them. This strategy of “intended use by intended users” drives the methods. It enables participants to see the strengths and weaknesses of the data, and to use them subsequently.

In addition to having these roots, PEM is built on another guiding principle. The methodology reflects the theory and practice of development that undergird the project itself. NHDP’s own theory of development relates to David Korten’s “third generation” or “sustainable systems” development, which attempts to influence institutional change at multiple levels in support of local capacity building, and in which the project constitutes a “social learning organization” (Korten, 1990). NHDP practice includes a participatory action research (PAR) approach to the work with the community and the district health system, and a focus on women and marginalized caste groups (Purdey et al., 1994; Fals-Borda and Rahman, 1991; Smith et al., 1997). In the evaluation literature, the principle of matching evaluation methodology with the theory driving the project is considered a process application issue, one of harmonizing program and evaluation values (Patton, 1990).

Process evaluation is unique among participatory methodologies in that the use of a conceptual framework of the project’s development intent is essential as a guide to defining results.

PEM as an RBM Tool

Process evaluation as an RBM tool aims to examine both the process of capacity-building and the developmental results achieved. As illustrated in the sections that follow, process evaluation methodology enables project managers to make explicit the “results chain” they expect to set in motion by combining project inputs in a sequence of capacity-building activities. By making explicit the results chain, the methodology provides project managers with a frame of reference for asking questions about project implementation and results. At first, this frame of reference is speculative, especially if it is created prior to the start of the project. As information about project implementation is captured through process evaluation, the framework becomes attuned to the actual dynamics at play within the project setting.

PEM is capable of generating two outputs: information upon which project stakeholders can act to improve performance, and indicators or “signposts” of capacity building. The latter can be useful for honing future monitoring and evaluation on capacity building “results” important to the project. Based on the strength of these two outputs, process evaluation itself adds capacity-building value to projects – especially those set up to learn from experience. This is in keeping with the “Learning Process Approach” described by Korten (1980).

The Project and Its Setting

The Nepal Context

Nepal is a country of 20 million people. Bordered on the north by China (Tibet) and on the west, south and east by India, Nepal was rather isolated from the world until 1954. A traditional monarchy and a one-party, palace-controlled government ran the country until 1990 when, following a brief popular uprising, a multi-party parliamentary system was established. Since then, there has been a growing commitment to democratization and decentralization in the pursuit of broad-based development goals.

Administratively, the country is divided into 75 districts. Each district comprises a municipal seat and clusters of 2 to 3 dozen settlements called Village Development Committees (VDCs). The Ministry of Health is responsible for providing curative and preventative services through a network of district facilities that reach to the VDC level. At the time of NHDP, the Institute of Medicine was responsible for training programs for most cadres of health workers in the system.

The project developed in response to what the project partners perceived as a “capacity gap” between the Ministry of Health’s stated intentions to raise the level of health in rural areas and its actual performance. In essence, the NHDP’s mandate was to assist the Ministry of Health to strengthen its capacity to more effectively meet the health needs of the rural poor in one isolated district of mid-western Nepal.

Three Streams of Project Activity

The PEM focused on the capacity-building experience of three streams of project activity: community development, district health strengthening and generalist physician training.


1. Community Development

At the time of the evaluation, NHDP was active in five Village Development Committees (VDCs)2 of Surkhet District. These VDCs, with a combined population of 35,000, are remote agricultural communities located in the foothills. Project community development staff and local facilitators were trained in participatory action research and participatory appraisal techniques which they used in their work with village groups.3

In all communities, villagers have organized themselves into groups either according to neighbourhood or according to interest – women, health or forestry. Working groups meet regularly and share developments at the VDC level. Initiatives arising from this community development process include irrigation, clean water schemes, forest conservation, vented stove construction, women’s literacy, micro-enterprise, and savings and credit schemes.

As the Project “works itself out” of a community, it assists the various working groups to relate to one another across VDCs to form a local “people’s organization.” These legally-recognized self-help NGOs are able to establish cooperatives, access external funds and organize collaborative village development schemes. They can also better advocate for community interests with government agencies such as the Ministry of Health. In this project experience, it takes two to three years per community to reach this stage.

2. District Health Strengthening

A second stream of activity addresses the delivery of health programs at the district level. The focal point is the Ministry of Health’s district public health office, which manages health prevention, promotion and curative services through a network of rural health facilities.

The Project’s aim was to strengthen the Ministry’s capacity to operate in a decentralizing bureaucracy. NHDP assisted Ministry staff to develop information-gathering systems and methods for planning, managing and coordinating district health activities, and to improve the functioning of the outlying health posts, the twenty-bed district hospital, and the referral system that links them. Activities included participatory needs assessment of staff, in-service training, and the development of drug schemes at health posts. In addition, project staff collaborated in the training of female health staff, traditional birth attendants and community volunteers. At the national level, the project contributed to health policy and planning through participation on national task forces for district health services.

2. The Village Development Committee is a political administrative unit consisting of up to nine villages.

3. Participatory appraisal has its roots in participatory action research (PAR) and rapid rural appraisal (RRA). It features an interdisciplinary group assessment process in a style that uses multiple techniques for data acquisition and analysis. It is people-oriented and locale-specific. It pursues an increasingly accurate understanding through rapid rounds of field interaction. PAR, in particular, places the “subjects” of research at the centre of the design and implementation process. It taps local knowledge, combines it with modern scientific expertise and provides implementers with useful information to guide action (Fals-Borda and Rahman, 1991; Smith et al., 1997).

3. Training of Generalist Physicians

The project coordinated a three-year post-graduate general practitioner training program through the new Faculty of General Practice at the Institute of Medicine, a faculty developed through an earlier project partnership. Most rural districts have poorly equipped hospitals and medical doctors without the requisite skills and support services to perform emergency and obstetric surgery. The program aims to place specially prepared generalist physicians with appropriate clinical and managerial skills into remote district hospitals. Consistent with the institutional capacity-building goal of the NHDP, the residents, who come to the program from within the Ministry of Health, are posted to a government district hospital upon completion.

Key Elements of the Process Evaluation Methodology

The Process Evaluation methodology as designed and tested in the NHDP has four key elements:

• a conceptual model to examine capacity-building;
• participatory strategies;
• an interdisciplinary team; and
• a qualitative approach to design, indicator development and field work.

None of these elements is new to the world of evaluation. Yet, combined, they offer an accessible, action-oriented assessment tool – especially for capacity-building projects – which addresses the need for a mechanism to determine and track “results” at the outcome and impact levels.

A Conceptual Model to Examine Capacity Building

Capacity-building projects emphasize matching participants’ needs and competencies with human and financial resources: staff, equipment, supplies and time, and then transforming this collection of inputs into plans and activities which build human and organizational capacity (ACCC, 1996).

Those responsible for implementation are keen to know how activities generate knowledge, attitudes and skills, and how the learning in turn influences others who are not directly involved. Further, they want to know whether learning will change the way things are done in an organization or community, and whether the changes are sustainable, i.e. what results are achieved at the outcome and impact levels.


Often the environment in which the project operates affects the potential for sustainability. Accordingly, implementors want to understand social, economic, political, administrative, cultural and other cross-currents that enable or constrain capacity-building results. A conceptual model that addresses these issues can be used to link the evaluator(s) to the logic of the project design.

The model developed in the NHDP and described below is a generalized, visual representation of the relationship between project activities and their capacity-building intent. It sets out a sequence of broadly described results that project designers might anticipate in a capacity-building project. At the same time it makes clear a set of key questions to guide the evaluator along this progression of results. When applied, the model helps the evaluator relate observations to what the project or activity is trying to achieve. It provides guidance in the search for performance indicators of capacity-building and enables project managers to build on their understanding of the project and its context, and make better decisions.

Figure 1. Composition of the NHDP Process Evaluation Team

Core Evaluators: 1 Nepali, 3 North Americans – interdisciplinary; evaluation experience; steady commitment; overall responsibility.

Counterpart Evaluators: 22 Nepalis (18 from villages, 3 from District Health, 1 from Kathmandu – MOH) – specialized contextual knowledge; short, intensive commitment.

Evaluation Support: NHDP management and field staff, 9 Nepalis and 3 North Americans – extensive NHDP experience; interpretation/translation; logistics support (5 additional Nepalis).

Participatory Strategies

Participation is the cornerstone of any effective field-based evaluation. The people who most need to know how well or how poorly project activities build capacity are those who carry them out. For managers of a project, being involved in understanding and assessing the capacity-building process and its results helps predict which future actions will advance or impede progress toward the expected results.


Process Evaluation methodology recognizes the wide range of stakeholders in any project (e.g. villagers, local officials, field staff, project managers, governments, funders/sponsors). In order to be an effective guide to these stakeholders, evaluation has to be relevant to each of their varying information needs.

For evaluation to be relevant and participatory, representatives from stakeholder groups have to be genuine participants from the design to the reporting phases of the exercise. Where this is possible, the stakeholders have a greater sense of ownership and accountability. As a result, they are more likely to respond to the evaluation findings and recommendations. Techniques employed to enable genuine participation, especially for pre-literate participants, include seeking representation from stakeholder groups as co-evaluators, and using story-telling, role plays, and gender-specific groupings.

An Interdisciplinary Team

The NHDP evaluation used a core team of four evaluators, three of whom were external to the project but had worked extensively in health development in Nepal. As an interdisciplinary team, it gathered expertise in evaluation, community health and medicine, cultural anthropology, health economics, social policy and community development. It was joined by counterpart evaluators representing the District Health Office, the District Hospital and the three VDCs participating in the exercise (six villagers per VDC). The joint team in turn was supported by NHDP staff and representatives of the major stakeholder organizations: the Institute of Medicine, the University of Calgary and the Ministry of Health. Altogether, there were 26 evaluators and 12 staff participants – a logistical feat. The composition of the evaluation team is illustrated in figure 1.

Daily debriefings are essential to order and synthesize the information that rapidly accrues. At these sessions, the benefits of interdisciplinary teamwork become clear as members contribute their various perceptions, often complementing each other’s insights to build a better understanding. Sometimes, when individual perceptions clash, the team decides whether more information is needed on the same topic and, if so, it plans the agenda accordingly.

This tripartite approach worked well: staff supplied insights into the operations of the project, villagers and Ministry of Health officials provided an understanding of the context of the project, and the core evaluators contributed their own “expert” disciplinary perspectives as well as a technical understanding of evaluation. The benefits of using an internal team with external consultants are well supported in the literature (Aaker and Shumaker, 1997; Muller-Glodde, 1991).

A Qualitative Approach

A qualitative approach is inherent to a methodology that evaluates human and organizational capacity-building. The conceptual model, key questions and indicators lend themselves to a qualitative approach to data gathering and indicator development. For example, quantitative information alone cannot address the question of sustainability, which is intrinsic to the goal of capacity-building projects. While visible quantifiable outputs such as latrines, literacy, and trained physicians can and should be counted as indicators of progress, such information must be balanced with qualitative information such as community perceptions of the breakdown of caste and gender barriers to participation and benefit, or increased confidence among women in commercial activities due to improved numeracy skills.

Participatory appraisal provides a toolbox of techniques to help interdisciplinary teams function effectively and efficiently. These techniques, such as semi-structured interviews, focus groups, incidental interviews, social mapping, group treks, and many others, help teams talk with local people and other team members and listen to them, as well as observe local conditions and study pre-existing information (Davis-Case, 1990). These techniques also enable team members to work together, while being individually able to exchange their interpretations of the same observation.

The Spiral Model of Capacity Building

Capacity-building “results” accumulate slowly, iteratively and unevenly. The spiral model, conceived by the NHDP, embraces the full life-cycle of a project or activity – from inputs to outputs, to outcomes and impacts.4 As a field tool, it helps evaluators trace and understand the results chain that can link a mere idea to a sustainable impact. The conceptual model also illuminates constraints and enablers that can alter or cut short that results chain. The three important attributes of the conceptual model are: its visual representation, its generic applicability to a single activity or to the project as a whole, and its key questions which help the evaluator seek out, test and verify indicators of capacity-building for each stage in the project’s life-cycle.

Description of the Model

The conceptual model developed for this capacity-building project was based on the assumption that new knowledge, skills, and attitudes influence ever larger circles of people within an organization, institution or community. In so doing, capacity-building represents the means by which a project achieves its purpose – in the case of the NHDP, a closer “fit” between consumer need and health service delivery.

This spiral model is represented schematically in figure 2. The figure shows a spiral in a box. The spiral is narrow at the bottom and becomes wider as it winds upward and outward to represent the wider reach.

4. This model emerged during the design phase through discussions among project personnel in Canada and Nepal.


Figure 2. Spiral Model of Capacity Building

(The figure shows an upward-widening spiral enclosed in a box. The labels along the spiral, from bottom to top, are: latent or unidentified problems/issues; exposure/ideas; plan; activities; knowledge, skills and attitudes; changes in individual behaviour; influence of peers in workplace and community; change in the way things are done; change becomes institutionalized. The enclosing box is labelled with constraints and enablers.)


Figure 3. Zones of Capacity Building

(The figure overlays five overlapping zones on the spiral, from bottom to top: mobilization; planning and organization; learning; diffusion; institutionalization. The enclosing box is again labelled with constraints and enablers.)


At the bottom of the schematic is the initial exposure to problems and ideas. As the ideas are discussed, they generate enough support to be transformed into a plan of action. Contained in the plan is one or more activities. The activities of a capacity-building process bring groups of people who can address the problems and ideas into direct contact with those organizing the activity. Once in contact, existing knowledge, skills and attitudes are sharpened and new knowledge, skills and attitudes acquired.

From this point on, changes in knowledge, skills and attitudes begin to affect ever widening circles of people, leading to corresponding changes in individual behaviour. In reality, the relationship between knowledge, skills, attitudes and behaviour is more a circular one, with any one dimension able to influence the other. Notwithstanding, changes in behaviour, exhibited by the person(s) directly involved in the activity, influence changes in their own immediate workplace or community settings. This leads to concrete changes in the way things are done. Others start to notice the changes and, if they like them, support the new ways of doing things. Indeed, this level of support increases to a point where the changes become institutionalized – a part of the way things are usually done.

The designers delineated five “zones” of capacity-building to simplify the development of indicators and the presentation of findings: mobilization, planning and organization, learning, diffusion and institutionalization. Specific indicators can then be identified and tagged to each of these zones of capacity building. The zones overlay the spiral and reflect aspects of the capacity-building process detailed in figure 2. As illustrated in figure 3, these zones of capacity-building overlap. Learning, for example, takes place throughout a large portion of the capacity-building activity.

As illustrated by the complexity of the NHDP, there is within each of the three streams a multitude of activities. Some activities are large scale, some are small; some are slow to come to fruition; and some are much faster to take hold. It is intended that each activity in some way contribute to the achievement of the purpose of the NHDP.

The box containing the spiral represents the environment of inherent constraints and enablers. There are two major kinds of constraints on the capacity-building process – internal and external. Internally, the transition from one phase of the capacity-building process to the next is by no means a certainty. The upward spiral of capacity-building is rarely – if ever – a regular, smooth flow. For example, a process might get off to an inappropriate start as a result of developing an idea that does not squarely address the problem. Later on in the spiral, particular people chosen for the activity may, for one reason or another, be unable to make use of the activity to bring about the desired change. Conversely, the appropriate people might be involved, but the activity may be wrongly designed or implemented.

The second kind of constraint is imposed from outside of the project. In figure 2, the spiral starts well within the confines of the box, but as the idea develops into an activity and the stakes increase, the spiral begins to push against the outside forces. Sometimes, the outside forces (e.g. natural disaster, political upheaval) can be so overpowering that they close in on the capacity-building activity and slow or stop its progress. Other times, the capacity-building process can be managed in such a way that the externally imposed constraints are reduced – that is, the spiral pushes the box outwards. The same environment that poses constraints on a capacity-building process can also contain enabling factors which, if taken advantage of, can help the activity achieve its purpose. In this model, the relationship between the spiral and the box is dynamic – one can influence the other, and the nature or strength of the influence can change.

Key Questions

Key questions emerge from this model to guide the evaluation’s inquiry within each stream of project activity. The questions used in the NHDP evaluation are listed in Box 1 below. Using questions like these as a guide, evaluators can examine a variety of activities within a project. They can also consider the extent to which project activities reinforce each other, create spin-offs and/or unintended results, and move the project toward its overall purpose.

Box 1. Key Evaluation Questions

1. What was the problem or issue? What triggered it? Who identified it?

2. How did the idea to address the problem/issue arise? Who raised it?

3. How was the idea transformed into a plan of action?

4. Was the planned activity congruent with the problem/issue? How so? How not? What resources were deployed and how?

5. Did the participants in activity “x” generate knowledge, skills and attitudes necessary to strengthen their immediate workplaces (e.g., health posts, district hospitals) or community groups? If not – why not?

6. Did the participants’ peers in these organizations or community groups receive and adopt/adapt the knowledge, skills, attitudes and behaviours generated in activity “x”? How? If not, what happened?

7. Did changes take place in the organization or community as a result of the knowledge, skills and attitudes generated in activity “x”? What changes? What implications (e.g., costs and benefits)? If not, why not?

8. How, if at all, did changes become institutionalized in the organization or community?

9. How did the end-users of the organization or community group benefit from the changes originally resulting from activity “x”, i.e., results?

10. What external factors impeded the capacity-building process? How?

11. What has been and can be done (and by whom) to counter these factors?

12. What external factors helped the capacity-building process? And how?

13. What has and can be done (and by whom) to take greater advantage of these factors?

14. What has to happen next? What has to happen before the objective of the activity can be met? Where must you return to in the spiral?
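The paper does not spell out which zone of the spiral each Box 1 question probes. As a planning aid only, the short sketch below shows one way an evaluation team might encode such a grouping; the zone assignments are an editorial assumption inferred from the order of the questions and from figures 2 and 3, not part of the original methodology.

# Editorial sketch (not from the paper): one plausible way to key the Box 1
# questions to the zones of the spiral model when planning field work.
# The zone assignments are an assumption based on the order of the questions.

BOX1_QUESTIONS_BY_ZONE = {
    "mobilization (problem and idea)": [1, 2],
    "planning and organization (plan and activity)": [3, 4],
    "learning (knowledge, skills, attitudes)": [5],
    "diffusion (peers, workplace, community)": [6, 7],
    "institutionalization": [8],
    "end-user results": [9],
    "constraints and enablers": [10, 11, 12, 13],
    "iteration (next steps on the spiral)": [14],
}

def questions_for(zone):
    """Return the Box 1 question numbers to emphasize for a given zone."""
    return BOX1_QUESTIONS_BY_ZONE.get(zone, [])

if __name__ == "__main__":
    for zone, numbers in BOX1_QUESTIONS_BY_ZONE.items():
        print(zone, "->", numbers)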


Stages of the Process Evaluation

The following is a breakdown of how the Process Evaluation was carried out in Nepal. Activities are divided into three stages: Preparation and Orientation, Information Gathering, and Synthesis and Reporting. The time span and the number of person days required to carry out the activities in each stage are listed under each heading. These indicate that the first and the last stages of the evaluation spanned the greatest amount of time, but that the Information Gathering stage, while only lasting two weeks, required the greatest investment of person days.

Preparation and Orientation

Preparation: Span of Time = 4 months, Amount of Time = 70 person days (12% of total person days)
Orientation: Span of Time = 2 weeks, Amount of Time = 30 person days (6% of total person days)

The Preparation and Orientation stage preceded the actual evaluation field work, and substantially exceeded the time allotments typical of a more traditional evaluation. This was due, in large measure, to it being an experimental evaluation requiring pioneering effort in its design – clarifying the concept and determining the methodology. More importantly, however, participatory evaluations such as this require more front-end preparation than those of a typical team of two short-term consultants. As the NHDP experience indicated, this front-end investment of resources can pay off later in the enhanced potency of the field work and its transformation into lessons and future actions.

In the case of the NHDP, senior management first rationalized and conceptualized the evaluation, sought funding for it from CIDA, and staffed it. Second, the project managers initiating the evaluation engaged field staff in the design and planning of the evaluation. Third, field staff, once engaged in the process, in turn engaged other key stakeholders such as villagers and health personnel. Finally, once assembled as the full evaluation team, the players got to know each other in orientation workshops and learned to work together. The latter constituted the first step in the implementation of the evaluation.

Information Gathering

Information Gathering: Span of Time = 2 weeks, Amount of Time = 350 person days (62% of total person days)

The core evaluators, with their counterparts, spent two weeks collecting information. They met key government officials in Kathmandu, district officials, hospital and health post staff around Surkhet District, and villagers in three of the five participating VDCs. With the help of project staff, the team singled out key questions and usually designated one or two members to lead the questioning. Any interviews with high-level officials were formal and planned ahead of time. More informal techniques were used when meeting with villagers and health practitioners.

Villagers in their preparation chose to use “community maps” of their neighbourhoods to show the visible results of the NHDP’s community development process. The maps portrayed such features as: latrines, vented stoves, irrigation projects, neighbourhood water taps, beehives, bamboo plantations, reforestation zones and health facilities.5 The authors (many previously illiterate) signed their names at the bottom of each map. Some groups used symbols in their maps while others accurately represented every single house, stove, latrine, etc. In one VDC, members of the neighbourhood groups took the additional step of analyzing their maps from a “before NHDP” and “after NHDP” standpoint, and presented the results on a separate flip-chart sheet. An abridged version of this chart is shown in figure 4.

5. The maps were created by the villagers themselves using a participatory appraisal technique, a user-friendly methodology for illiterate and semi-literate populations.

Figure 4. Before/After NHDP Chart prepared by Villagers of Babiyachaur

Before NHDP → After NHDP

• Most people used thumb print to sign name → 90% of the people can sign their name
• No vented stoves → 210 households have vented stoves
• No more than 4 latrines in village → 50 latrines in use
• Women were not permitted to attend group meetings → Mostly women participate in the meetings
• Ordinary people (non-high caste) not accustomed to talking with outsiders → People feel comfortable to talk with everybody
• Ordinary people did not know about banking → All the banking papers are kept by the village groups themselves
• Tailoring was only done by Damai caste → Anyone interested does training
• Money lenders charged up to 60% per annum interest on loans → Villagers have their own savings and loan program (low interest)
• Little contact with government and non-government service agencies → Organized for services of line agencies
• No irrigation ditch for kitchen garden → Ditch for kitchen garden completed


The core evaluators used this information as a “springboard” for their community research. They huddled around the social maps with the counterpart evaluators and heard how the various activities unfolded – how the villagers identified the problems, arrived at a solution, found the resources and organized themselves to carry out the work. Having gained an understanding of village development, evaluators were then led about the village by their hosts to observe outputs and discuss longer-term outcomes with small groups in informal interviews and incidental encounters.

Synthesis and Reporting

Synthesis and Reporting: Span of Time = 4 months, Amount of Time = 110 person days (20% of total person days)
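As a quick arithmetic check (an editorial addition, not part of the original paper), the short sketch below totals the person-day figures quoted under the three stage headings above and recomputes each stage’s share; the results come out close to the rounded percentages reported (12%, 6%, 62% and 20% of roughly 560 person days in total).

# Editorial sketch: totalling the person-day figures quoted in the stage
# headings above and recomputing each stage's share of the total.

stages = {
    "Preparation": 70,
    "Orientation": 30,
    "Information Gathering": 350,
    "Synthesis and Reporting": 110,
}

total = sum(stages.values())  # 560 person days

for name, days in stages.items():
    share = 100.0 * days / total
    print(f"{name:<24}{days:>5} person days ({share:.1f}% of total)")
print(f"{'Total':<24}{total:>5} person days")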

Intrinsic to process evaluation is the idea that all stakeholder groups participating in design and research should also be part of a report-back process. Thus, in each VDC, the visit ended with a village assembly, where the “counterpart” evaluators from the community chaired the feedback session. And prior to leaving Surkhet, NHDP field staff organized a one-day debriefing meeting for all those participants who had attended the initial Surkhet orientation two weeks earlier – the village counterpart evaluators, Ministry of Health officials, NHDP staff and core evaluators.

Another feedback session was held in Kathmandu with members of the Project Steering Committee, following which the evaluators revised the reports and used them as a basis for a draft document. The draft was then circulated among project staff and submitted to CIDA for comment and action before being finalized.

Analysis of Findings

The key questions associated with each zone of the conceptual model became the organizing elements around which the data were compiled. In an iterative manner, the evaluators discussed, anticipated and then observed what kind of capacity-building results did/should emerge within each stream of activities. Then those observable results judged to be generically indicative of changes in attitudes, behaviours and practices in each zone of capacity building were documented, first in a matrix, and then on the spiral. This qualitative, inductive process of analyzing findings yielded the “indicators” – signposts alerting the evaluator to changes in capacity. The changes reported by the Babiyachaur villagers in figure 4 illustrate a sampling of the abundance of indicators that could be tracked in the community stream of activities. While these indicators were useful for assessing the project in its current state, they also served to refine the conceptual model, thereby further assisting the project to track its future capacity-building efforts.
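To make the matrix-then-spiral step concrete, here is a minimal sketch of how indicators might be recorded against the five zones and tallied as observed versus anticipated. It is an editorial illustration, not the evaluators’ actual instrument: the zone names and indicator labels are taken from figure 5, but the observed/anticipated flags are placeholders, since the published figures mark that distinction typographically.

# Editorial sketch: recording capacity-building indicators by zone and
# tallying observed versus anticipated findings, in the spirit of the
# evaluators' matrix. Flags below are placeholders, not actual findings.

from collections import defaultdict

ZONES = ["mobilization", "planning and organization", "learning",
         "diffusion", "institutionalization"]

# (zone, indicator, observed?) - labels taken from figure 5 (community stream)
community_records = [
    ("mobilization", "widening participation", True),
    ("mobilization", "individual curiosity", True),
    ("planning and organization", "gender and caste balance", True),
    ("learning", "technical skills", True),
    ("diffusion", "replication", True),
    ("institutionalization", "established cooperative", False),  # placeholder
]

def tally(records):
    """Count observed and anticipated indicators for each zone."""
    counts = defaultdict(lambda: {"observed": 0, "anticipated": 0})
    for zone, _indicator, observed in records:
        counts[zone]["observed" if observed else "anticipated"] += 1
    return counts

summary = tally(community_records)
for zone in ZONES:
    c = summary[zone]
    print(f"{zone:<26} observed={c['observed']} anticipated={c['anticipated']}")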

On the basis of evidence provided through these performance indicators, the evaluators concluded that results in the capacity-building process had been uneven across the three streams of project activity – community, district and physician training. The project, in this first phase, had been more successful in achieving outcome and impact level “results” within the VDCs than it had within the district health system.

A visual depiction of the spiral model was used to illustrate project results in relation to the purpose of the Project. Figures 5 and 6 provide a sample representation of the findings for the community and district streams of activity. The spiral is positioned on the right-hand side of each figure. Indicators of capacity-building, corresponding to the five zones of the capacity-building process – mobilization, planning and organization, learning, diffusion and institutionalization – are listed on the left-hand side. Indicators written in plain bold text represent findings observed by the evaluators. Indicators written in italics represent findings/results anticipated but not observed by the evaluators; in other words, results not achieved (as yet). The indicators shown here are composite indicators that have a number of sub-indicators, or particular quantifiable/observable evidence associated with them. For example, the indicator “gender and caste balance” has data associated with gender and caste ratios in group meetings, in leadership roles, and in participation in activities.

Figure 5. Indicators – Community Stream

Mobilization: widening participation; inclusiveness; willingness to meet; individual curiosity
Planning & Organizing: plans of action; cross-village discussion; gender and caste balance; new use of resources
Learning: access to new resources; technical skills; organizational skills; shifts in power
Diffusion: replication; interaction effects; external requests; expansion teams
Institutionalization: change in codes of conduct; established cooperative

Figure 6. Indicators – District Health Stream

Mobilization: joint vision and plan; work with the MOH; rapport-building time
Planning & Organizing: collaboration with others; learner participation; priority need
Learning: peer learning; follow-up application; skill development; congruent with need
Diffusion: identifiable “products”; requests for “products”
Institutionalization: replication by MOH; MOH/NGO collaboration; policy change in MOH; change in management

Note: In the original figures, indicators written in italics represent findings/situations anticipated but not observed by the evaluators.

Results in the Community Stream

On the basis of clear evidence of “results” in the community stream, the evaluators concluded that initiatives are on the brink of sustainability (the impact level), and need short-term support to consolidate independent, pro-active local organizations. The spiral diagram (figure 5) displays the indicators evident in each zone of capacity-building up to and including institutionalization.

Thus, evidence suggests that the community development capacity-building process has resulted in: a heightened level of confidence among villagers, a stronger sense of community identity, a vigorous democratic decision-making structure, and villagers’ community leadership capability (e.g., problem solving, conflict resolution). These are the outcome-level results.

External Influences on Community Capacity-building

As part of presenting the findings, the evaluators noted major constraints and enablers influencing the capacity-building process at the community level. For example:

Ecological issues (enabler) – Villagers, acutely aware of the disappearance of protective vegetation and soil erosion, were motivated to form forestry groups which didn’t exist before. These groups established relationships with the Ministry of Forestry, designated zones for reforestation, planted trees, hired wardens and established village bylaws (with enforcement) to control access and use.


The country’s political and administrative situation (as both an enabler and a constraint) – Democracy in 1990 created a new openness toward local-level planning and management. But embedded features of the administrative systems continued to constrain the process, notably the lack of readiness/ability of the line agencies, including the Ministry of Health, to implement existing policy regarding local participation.

Results in the District Health Stream

The evaluators found that the district management strengthening initiatives, while well planned and received by the Ministry of Health and others, have not yielded sustainable results to date. Figure 6 reflects the evaluators’ observations that while there is some diffusion of learning as a result of project activities, there does not appear to be lasting change in the way the district health system functions.

In general, the district health initiatives did yield successful activities, and results at the output level. Examples include: annual report writing, training of Traditional Birth Attendants and efficiently functioning drug schemes in all health posts in the District. There is evidence of some diffusion in that in-service training packages designed for health post staff have been used beyond Surkhet; in fact, the Refresher Curriculum for Auxiliary Nurse Midwives was adapted in 1997 by the National Health Training Centre. The evaluators also observed improved collaboration and coordination among regional and district health professionals and community-level staff.

External Influences on District Capacity-building

A number of external constraints clearly influenced the district stream of activities. The lack of diffusion in building institutional capacity in the Surkhet district was attributed largely to frequent transfers of staff in and out of the district health system, sweeping changes to the organizational structure of the Ministry of Health, and continuing scarcity of financial resources for health programming.

Use of the Findings

The findings, presented in graphic form, allowed recommendations to be implemented quickly, and demonstrated how such a methodology could be used to improve project effectiveness. As it turned out, the earlier phases of the synthesis and reporting accomplished most of what was expected of an evaluation, while the final document served as polished reference material.

The findings of this evaluation contributed strongly to the design of the follow-on project (NHDP-2) – in a significant change in the way the District Health strengthening component would be addressed, and in the capacity building support proposed for a local NGO to expand the community development work with the same methodology. Since the evaluation, several other groups in Nepal (Britain Nepal Medical Trust – BNMT; Swiss Health Development Project; and the Community Health Development Project of the United Missions to Nepal – CHDP-UMN) have used the Process Evaluation Methodology for their own monitoring and evaluation activities. The methodology has also been used elsewhere, e.g., in a recent social forestry project in Indonesia and a community-based water and sanitation project in Myanmar,6 where an effort is being made to utilize the modality from the outset. It has also been adapted for use in a number of Canadian settings, e.g., in community health programs of the B.C. southern mainland Regional Health Authority.

Lessons Learned and Implications for RBM

This section first outlines the specific lessons learned in each of the four elements of the Process Evaluation Methodology. It then identifies the advantages and challenges for the methodology vis-à-vis Results-Based Management. Finally, it presents a general “recipe” for using Process Evaluation as an RBM monitoring and evaluation tool for a typical five-year project.

Lessons from the Four Elements of the Methodology

This first application of the Process Evaluation methodology yielded a number of lessons that are itemized under the four characteristics of the methodology described above.

1. Use of a Conceptual Model

• The model can be used to analyze a single activity, multiple activities, or the project as a whole.

• The narrower the scope of the analysis, the potentially deeper the analysis.

• The team must be comfortable with all conceptual tools prior to field work; preparation and orientation are critical for understanding the conceptual model and translating it into a specific evaluation plan.

2. Reliance on Participatory Strategies

• Participatory design and management requires good rapport, communication, and overall coordination.

• In Process Evaluation, people’s participation itself builds individual capacity in teamwork and in evaluation skills.

• Staff can offer a depth of understanding about subject matter. However, staff can sometimes be put in a compromising situation and might inhibit research activities and/or the perceptions of non-staff evaluators.

6. Kaltim Social Forestry Project, a CIDA-funded bilateral project executed by the University College of the Cariboo with Universitas Mulawarman; UNCHS–UNDP Human Development Initiative, Myanmar.


• Participatory strategies reduce the usual feelings of threat associated with evaluation, since they can bridge cultures, staff with non-staff, and local with external.

• Participation of local people as evaluators allows questions to be translated into village-level terminology and seems to increase the comfort level in the discourse that follows.

• Evaluators cannot assume that all stakeholders are able or accustomed to analyzing situations in a critical manner. Some stakeholders are more analytical, some more descriptive; they should be allowed to complement one another in their participation.

3. Interdisciplinary Team

• Orientation is critical for building the team dynamic necessary for effective field work. Within a team, roles should be clearly delineated ahead of time. For example, are staff to be evaluators or resource persons? Who translates and interprets? Who leads off in the information-gathering session? This dynamic requires competent facilitation, even to the point of dedicating staff time exclusively to that role.

• Evaluation team members should learn as much as possible about each other’s strengths and weaknesses, and draw on them accordingly.

• Daily team debriefings and planning are essential to manage the tremendous amount of information that is collected.

• Evaluators should always refer to the conceptual model and accompanying questions when debriefing and planning for field work.

4. Qualitative Approach and Techniques

• Accidental/incidental interviews are an important means of getting “backstage” information and a broader context for research findings.

• “Community mapping” is very valuable for collecting both quantitative and qualitative information. Mapping is visual, participatory, and evocative.

• Process Evaluation tends to make explicit what is known implicitly. It can allow project issues to percolate to the surface for resolution.

Advantages and Challenges vis-à-vis Results-Based Management

1. Advantages

Process Evaluation Methodology addresses the need within RBM for an appropriate evaluation framework in two very important ways. First, it focuses on results and enables the identification of performance indicators for measurement and monitoring. Second, it allows for iteration and flexibility in project management, including monitoring, and provides a structure to guide it. Accordingly, the aspects of PEM most applicable to RBM are the use of a conceptual framework centered on cause-and-effect relationships within the project intent, together with the participatory nature of the exercise itself, including the tripartite arrangement of the evaluation team.

RBM is a western cultural construct. Process Evaluation, with a visual conceptual model, attempts to place observed results into a context that is understood by the participants, both implementors and end-users. In other words, PEM translates RBM into a field-based model in which the indicators are derived from the lived experience of the project.

Guided by a visual model of the intent of the project, the methodology can focus on longer-term outcomes and impacts, even when they are clearly in the future. Key critical conditions (to use the current RBM jargon), i.e., risks and assumptions, can be easily kept in mind and addressed with each result or lack thereof.

Designing a conceptual model that reflects the project’s intended results, in consultation with the project implementors, is a lengthy but important process. NHDP’s spiral model of capacity building enabled varying points of view on the same subject matter to be analyzed with a commonly understood visual depiction of the project’s intended results. It allowed people to relate their different observations to the capacity-building nature of the project. Indicators of results “fell out” from this process in all streams of capacity-building activity.

The spiral model and its graphic depiction also served as a “touchstone” tool to describe the capacity-building observations and results to others, such as decision makers in the Ministry of Health and project managers at CIDA. It was also a very powerful tool for the staff to identify ways to improve project management for results. Because the primary stakeholders were active in the conduct of the process evaluation, it was relatively easy for recommendations that flowed from the evaluation to receive full support and be quickly implemented.

The authors learned that for PEM to be effective as a results-based approach to the evaluation of capacity building, it must be participatory throughout. It is particularly important to present and discuss the “results” with all participants: villagers, health professionals, project managers, and funding agencies. Indeed, the authenticity of PEM rests on being able to make the link between action and reflection at all levels. As such, this methodology is best used with projects that are grounded in participation, philosophically and in practice.

Our experience is that the tripartite arrangement, involving outside evaluators, local counterparts, and project staff, can create a dynamic that brings out the keenest insights on “results” from each of the three groups. Each group approaches questions from different angles. When the emerging insights are weighed against the set of questions, clear indicators of results can be teased out through the skill of the outside evaluators.


Participating in the design and implementation of the evaluation gave project implementors, from community to management level, exposure to evaluation as a relevant and flexible tool for management and quality control. The cycle of synthesis and reporting (in the District, in Kathmandu, and in the draft/final reports) served to clarify the findings and facilitate the rapid implementation of changes in the management of the Project. By the time the final document was issued, most of the recommendations had already been addressed.

The authors believe that Process Evaluation can be used repeatedly throughout a project in a results-based framework. Each time the methodology is applied, either for ongoing monitoring (where the focus is on operational effectiveness) or periodic evaluation (where the focus is on progress on expected results), the performance results framework becomes more refined. It is a learning methodology which becomes customized to each project context.

2. Challenges

Comprehensive evaluations such as this require knowledge and skills beyond the usual range of an individual evaluator’s talents. This methodology encourages a mixture of diverse programmatic and methodological expertise at all stages. Thus, the chosen “external” evaluators must be open to using the methodology and to working from an interdisciplinary approach.

Process evaluation of a multi-stream project that requires many people can be chaotic. Therefore, orientation of a large number of people to the process is essential for success and must be handled carefully through participatory workshops.

The duration of the process evaluation is uncertain as it depends on the readiness of project staff and beneficiaries to fully participate with the external evaluators. It can also be somewhat “messy,” depending on the speed of participation. In the case of the NHDP, it lasted 12 months from preparation to the drafting of the evaluation report. Intensive involvement lasted 3 months. As this first application of PEM included the initial design work, subsequent applications as a monitoring or evaluation tool would likely consume less time.

Process evaluation may cost more than a conventional evaluation, particularly when more individuals are involved and more time is used. However, PEM has a two-fold output: assessment of progress on results and a learning process that enhances the skills and self-reliance of the participants. The conventional evaluation has the former only. In the case of the NHDP evaluation, because costs were roughly comparable to a conventional CIDA end-of-project evaluation, it was judged to be good value for money.

PEM should not be seen as an all-encompassing evaluation methodology suitable for all occasions. It may need to be supplemented with methods designed to provide information on items such as sources and uses of funds, allocation and costing of inputs, and management audit. Other methodologies, such as surveys and review of secondary documentation, might also be required to gather additional quantifiable program results. In health, for example, these results may relate to the utilization of health services, the retention of health post staff, or the rational use of drugs. In addition, if a project’s duration is less than three years, it would likely be too short to iteratively develop such a monitoring and evaluation framework.

A “Recipe” for Using Process Evaluation as an RBM Tool

What distinguishes RBM from other management modalities is its iterative approach to management for results. Thus, performance measures for the project are not firmly established a priori. Information generated during implementation will contribute to baseline performance indicators. Successive PEMs can systematically capitalize on this intelligence.

In a typical project cycle, the original project management plan includes preliminary performance indicators associated with expected results at three levels: outputs, purpose, and goal. Indicators are tentative, have usually been derived from a combination of office and field-based information, and represent the best guess of what is possible. In CIDA-sponsored projects, the agency recognizes that anticipated results and their associated indicators will likely need modification and clarification. Given the above, a “recipe” for using Process Evaluation as a field-based RBM tool for a typical five-year CIDA project might include the following steps:

1. During the Inception Mission

The purpose of the Inception Mission is to refine the Project Management Plan. During the Mission, accountability for the project is officially taken over by the implementing agency with its developing country partners. The key players and stakeholders are present; usually the local staff is in place. A team working session is held to clarify the project purpose and conceptualize the intent of the project. At this juncture, the participants pose the questions to be asked to track progress within each major activity. These questions might look like those presented in box 1, section IV-B. At this early stage in the project, the conceptual model of the development intent may be clear but require filling in of detail, or the development intent may not be clear and will require a working session to sketch it out. This exercise modifies the “expected results” in the original project document. Indicators for the output level are easier to identify at this point; indicators for the outcome and impact levels are still vague, as the project is not yet fully underway.

2. During the First 18 Months

In the first 18 months of the project, project staff implement the RBM monitoring system using the key questions developed during the inception mission. Now that the project is in full swing, staff conduct an internal monitoring exercise that hones the questions and brings clarity to the emerging performance indicators. In this exercise, participants supply quantitative and qualitative information on current activities and constraints, and review the project intent/conceptual model. The outputs of this exercise form the basis of the “six-monthly” reports.

3. Within Years 2-3: First Interim Evaluation (Optional)

All the outputs from the monitoring exercises become inputs to the interim evaluation, the first full “process evaluation.” This exercise may not be required in less complex projects or in projects performing as anticipated with a sound monitoring system in place.7 A first interim evaluation would be especially valuable in instances where there appear to be significant design flaws, where a program is new and largely experimental, or where the context has significantly altered the implementation.

This exercise, with a tripartite evaluation team, clarifies, as it did with the NHDP, the indicators of performance under each set of activities. Progress on expected results is easily documented and reported. The findings are used by project participants to plan remedial and reinforcing activities, as well as annual plans. Further, indicators of future progress become much clearer than they were at the outset of the project. These indicators now provide a platform for re-casting the “expected results” at the outcome and impact levels of the RBM framework. Subsequent monitoring of the project will now be based on these fully defined performance indicators.

4. During Years 3-4: Mid-Term Evaluation (or Second Interim Evaluation)

This evaluation takes place roughly at mid-term or, where there has been a first interim evaluation, when at least one more year is left in the project. This evaluation is strategic in nature. The implementors can still look ahead and make adjustments in their strategies. In addition, the findings of the evaluation help to design follow-on activities, should they be appropriate. A full-scale process evaluation would be implemented for this purpose. If the stakeholders are in a position to plan another project, the raw material will be present for planning, i.e., an identifiable results chain. The next steps will be clear.

5. End-of-Project Evaluation (or Post-Project Impact Evaluation)

This evaluation draws upon the accumulated knowledge of earlier monitoring and evaluations. It is a small summative exercise to gather lessons learned and provide useful baseline data for any follow-on activities.

7. One example is “Process Oriented Monitoring,” a modality similar in philosophy to the PEM, pioneered by a GTZ rural development project in Pakistan (Muller-Glodde, 1991).

Conclusion

PEM used as an RBM tool enabled evaluators of the NHDP to look behind the visible outputs of the project to find evidence of capacity-building results at the outcome and impact levels. It also helped project stakeholders articulate and work with the conceptual framework that underlies the project, graphically depicted as the spiral model of capacity-building. In addition, it provided all participants with an experience in the design and implementation of an evaluation exercise (a first for most) that they could use subsequently for project monitoring.

PEM is flexible enough to be used for short-term project monitoring and for long-term evaluation. It can inform the project of short-term operational issues, i.e., the conversion of project inputs to outputs, as well as longer-term strategic issues. Put another way, it can be developed for use in measuring progress in relation to outputs and outcomes, as well as progress on broader performance indicators of replicability and sustainability.

PEM is more difficult to carry out than a conventional evaluation, and may not be suitable to every situation. For example, it may not be suited to a project whose management structure is neither participatory nor open to a learning process as part of evaluation design. From the authors’ experience, PEM can be a valuable management tool throughout the life of longer-term projects organized to “learn by doing” and intent on building human capacity.

References

Aaker, J. and J. Shumaker, Looking Back and Looking Forward: A Participatory Approach to Development, Little Rock, Arkansas, Heifer Project International, 1997.

ACCC, Results-Based Management for Human Resource Development Projects, Ottawa, Association of Canadian Community Colleges, International Services Bureau, 1996.

CIDA, Results-Based Management in CIDA: Policy Statement, Hull, Quebec, CIDA, Policy Division, 1996.

CIDA, An RBM Approach to the Management Plan/Inception Report: Guidelines for Staff, Hull, Quebec, Performance Review Unit, Strategic Planning and Policy Division, Asia Branch, CIDA, 1997.

Davis-Case, D., ed., The Community’s Toolbox: The Idea, Methods and Tools for Participatory Assessment, Monitoring and Evaluation in Community Forestry, Rome, Food and Agriculture Organization (FAO), 1991.

Eyford, G., Institutional Capacity Building in Academic Settings, Consultant Report to the Canadian International Development Agency, Hull, Quebec, 1991.


Fals-Borda, O. and M.A. Rahman, Action and Knowledge: Breaking the Monopoly with Participatory Action Research, New York, Apex Press, 1991.

House, E., “Assumptions Underlying Evaluation Models,” Educational Researcher, VII, 1978, p. 4-12.

Justice, J., H. Dixit, D. Harding and P. Cox, Process Evaluation of the Nepal Health Development Project: Final Report, University of Calgary, International Centre, 1995.

Korten, D., Getting to the 21st Century: Voluntary Action and the Global Agenda, West Hartford, Conn., Kumarian Press, 1990.

Muller-Glodde, U., ed., Where There is no Participation: Insights, Strategies, Case Studies, “Do’s and Don’ts” in Regional Rural Development, Asia, GTZ, 1991.

Patton, M. Q., Utilization-Focused Evaluation, 2nd ed., Beverly Hills, CA, Sage, 1986.

____, Qualitative Evaluation and Research Methods, 2nd ed., Newbury Park, Sage, 1990.

Picciotto, R., “Introduction: Evaluation and Development,” New Directions for Evaluation, 67, Fall, 1995, p. 13-23.

Picciotto, R. and R. C. Rist, eds., Evaluating Country Development Policies and Programs: New Approaches for a New Agenda, New Directions for Evaluation, San Francisco, Jossey-Bass, 67, Fall, 1995.

Purdey, A., G. B. Adhikari, S. A. Robinson and P. Cox, “Participatory Development in Rural Nepal: Clarifying the Process of Community Empowerment,” Health Education Quarterly, XXI, 3, 1994, p. 329-343.

Qualman, A. and P. Morgan, “Applying Results-Based Management to Capacity Development,” working document, prepared for the Political and Social Policies Division, Policy Branch, Hull, Quebec, CIDA, 1996.

Robertson, A. and M. Minkler, “New Health Promotion Movement: A Critical Examination,” Health Education Quarterly, XXI, 3, 1994, p. 295-312.

Robinson, S. A. and P. Cox, “Process Evaluation in Nepal: Tracking Capacity Building in Health Development,” University of Calgary, International Centre, Technical Paper No. TP95/1, 1995.

Rondinelli, D.A., Strategic and Results-Based Management in CIDA: Reflections on the Process, an unpublished discussion paper prepared for the Corporate Branch, Hull, Quebec, 1993.

Smith, S.E., D. Willms, and N. Johnson, Nurtured by Knowledge: Learning to Do Participatory Action Research, New York and Ottawa, Apex Press and IDRC, 1997.

Stake, R., Evaluating the Arts in Education: A Responsive Approach, Columbus, Ohio, Charles E. Merrill, 1975.


Weiss, C., Evaluation Research: Methods of Assessing Program Effectiveness, Englewood Cliffs, New Jersey, Prentice-Hall, Inc., 1972.

World Bank, Effective Implementation: Key to Development Impact, Report of the World Bank’s Portfolio Management Task Force, Washington, D.C., World Bank, 1992.

World Bank, Evaluation Results, Operations Evaluation Department (OED), Washington, D.C., World Bank, 1993.