MONITORING AND EVALUATION SERIES
GLOSSARY OF COMMONLY USED M&E TERMINOLOGIES
By:
Enock Warinda
2011
This is a living document and will be updated periodically as required.
Planning, Monitoring and Evaluation definitions
Many ASARECA staff and partners bring a wealth of PM&E experience to bear on the
programs and projects that they are responsible for. Frequent obstacles to effective discussion
of PM&E are the misunderstandings that result from a lack of agreed terminology. Many donor and implementing organizations have their own specific definitions of the terms
commonly associated with PM&E. To facilitate communication inside ASARECA, the
following section lists some key terms and establishes a common definition.
The growth of Monitoring and Evaluation (M&E) units in government, together with an
increased supply of M&E expertise from the private sector and public institutions, calls for a
common language on M&E. M&E is a relatively new practice, which tends to be informed by
varied ideologies and concepts. A danger for stakeholders is that these diverse ideological
and conceptual approaches can exacerbate confusion and misalignment. The standardization
of concepts and approaches in ASARECA is particularly crucial for the enhancement of
service delivery. Please note that this glossary is not exhaustive. It should
rather be viewed as an attempt to provide ASARECA members with a shared understanding
of key terminology used in M&E. ASARECA is in the process of refining its M&E systems
to improve the performance of its system of governance and the quality of its outputs,
thus providing early warning systems and mechanisms to respond speedily to problems as
they arise.
Detailed Glossary
Accountability
Obligation to demonstrate that work has been conducted in compliance with agreed rules and standards or to
report fairly and accurately on performance results vis-à-vis mandated roles and/or plans. In terms of
development, it refers to the obligations of partners to act according to clearly defined responsibilities, roles and
performance expectations, often with respect to the prudent use of resources. For evaluators, it connotes the responsibility to provide accurate, fair and credible monitoring reports and performance assessments.
It also refers to planning for and the monitoring, evaluation and reporting of performance and compliance
against agreed upon organizational standards and outcomes. It enables ASARECA to answer to all stakeholders
for results and impacts and the use of resources and requires the fullest communication between different
programs, ASARECA core service units and responsibilities.
Action research
Action research is an interactive inquiry process that balances problem-solving actions implemented in a
collaborative context with data-driven collaborative analysis or research to understand underlying causes,
enabling future predictions about personal and organizational change.
Activities
Activities refer to the actions taken to achieve the required outputs and to accomplish the planned objective.
Examples of activities include:
- Conducting needs assessment and training on …
- Establishing trials on …
- Marketing value-added products of …
- Negotiations and dialogue with …
- Monitoring/evaluating program results …
- Allocating funds to …
- Providing technical assistance to …
- Conducting information sessions, etc.
Activity Schedule
An activity schedule refers to the graphic representation that sets out the timing, sequence and duration of project
activities. It can also be used to identify milestones for monitoring progress and to assign responsibility for
achievement of milestones.
Advocacy
Refers to the act of representing or defending others (individuals, communities, etc) and using evaluation results
to promote and inform.
Aggregate
To aggregate is to put together (collapse) data from different sectors (such as men, women, households,
communities, management practices, regions, locations, technologies, innovation, etc) into one category. For
example: putting together data from men and women to have household-level data, or collapsing data from
numerous households into community-level data. This requires organization beforehand, at the levels of data
coding, collection, and computer input.
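For illustration, here is a minimal Python sketch (the community names and income figures are hypothetical) that collapses household-level records into community-level averages:

    from collections import defaultdict

    # Hypothetical household-level records: (community, household income)
    households = [
        ("Community A", 1200), ("Community A", 900),
        ("Community B", 1500), ("Community B", 1100), ("Community B", 800),
    ]

    # Aggregate: collapse household incomes into community-level averages
    totals = defaultdict(float)
    counts = defaultdict(int)
    for community, income in households:
        totals[community] += income
        counts[community] += 1

    for community in sorted(totals):
        print(community, totals[community] / counts[community])

Note that the grouping key (here, the community name) must be coded consistently at data collection and entry, which is why the organization described above is needed beforehand.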
Agricultural Development Domain
A development domain refers to the spatial representation of preconditions or factors considered important for
rural development. It can be characterized using stratification criteria that, based on theory and previous
research, determine the comparative advantage of rural areas with respect to frequently occurring livelihood
strategies. It is constructed by the intersection of three spatial variables: agricultural potential, market access and population density, using a geographic information system (GIS).
Agricultural Performance Indicator (API)
Agricultural Performance Indicator refers to the extent or level of contribution of agriculture to an economy or
to the region. It is computed as follows:
Observed level of contribution
Target level of contribution
Observed level of contribution = the contribution that agriculture makes to the economy in aparticular time period, usually one year
Target level of contribution = the maximum expected or planned contribution that agriculture couldmake to the economy given the resource base of the economy.
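As a worked illustration with purely hypothetical figures: if agriculture contributed USD 4.2 billion to the economy in a given year, against a target (maximum planned) contribution of USD 6.0 billion, then API = 4.2 / 6.0 = 0.70, i.e. the sector performed at 70% of its target level.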
Analysis of Objectives
Identification and verification of future desired benefits to which the beneficiaries attach priority. The output of
an analysis of objectives is the objective tree.
Analysis of the Strategies
This refers to the critical assessment of the alternative ways of achieving objectives, and selection of one or
more for inclusion in the proposed project.
Analytical methods
Methods used to process and interpret information during an evaluation.
Appraisal
Within the context of ASARECA, appraisal refers to an overall assessment of the relevance, feasibility and
potential sustainability of a development intervention prior to a funding decision. Its purpose is to enable
decision-makers to decide whether the activity represents an appropriate use of resources.
Archival Records
Involves gleaning information from existing records kept by your own or another institution in order to gather data for your evaluation.
Assessment
This refers to a process of making a judgment on the basis of the analysis of available information.
Assumptions
Refer to hypotheses about factors or risks which could affect the progress or success of a development
intervention. They are made explicit in theory-based evaluations where evaluation tracks systematically the
anticipated results chain. They represent the 4th column of the Logframe matrix.
Attribution
The ascription of a causal link between observed (or expected to be observed) changes and a specific
intervention. Attribution refers to that which is to be credited for the observed changes or results achieved. It
represents the extent to which observed development effects can be attributed to a specific intervention or to the
performance of one or more partners, taking account of other interventions, (anticipated or unanticipated)
confounding factors, or external shocks.
Attrition
A situation in which some members of the treatment or control group, or both (e.g. farmers and cluster groups), drop out from the sample. It also refers to failure to collect data from a unit in subsequent rounds of a panel data survey. Attrition in the treatment group is generally higher the less desirable the intervention is.
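As a worked illustration with hypothetical numbers: if 250 of 300 treatment-group farmers but 280 of 300 control-group farmers can be re-interviewed in a follow-up round, attrition is roughly 17% in the treatment group against roughly 7% in the control group; such differential attrition can bias impact estimates if those who drop out differ systematically from those who remain.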
Audit
Refers to an independent, objective assurance activity designed to add value and improve an organization's
operations. It helps an organization accomplish its objectives by bringing a systematic, disciplined approach to
assess and improve the effectiveness of risk management, control and governance processes.
Note: It is worth noting a distinction between regularity (financial) auditing, which focuses on compliance with applicable statutes and regulations; and performance auditing, which is concerned
with relevance, economy, efficiency and effectiveness. Internal auditing provides an assessment of
internal controls undertaken by a unit reporting to management while external auditing is conducted by
an independent organization.
Baseline Data
A set of data that measures specific conditions (almost always the indicators we have chosen through the design
process) before a project, initiative or program starts or shortly after implementation begins. It provides a
starting point to compare project performance over the life of the project. Example: If you are on a diet, your
baseline is your weight on the day you begin, or the level of income at the start of a project. If reliable historical
data on your performance indicator exists, then it should be used; otherwise, you will have to collect a set of
baseline data at the first opportunity.
Baseline Study
Refers to an analysis describing the situation prior to a development intervention, against which progress can be
assessed or comparisons made. The following questions should be considered when planning a baseline study:
- What information is already available?
- What will the study measure?
- Which data will effectively measure the indicators?
- Which methodology should be used to measure progress and results achieved against the project objectives?
- What logistical preparations are needed for collecting, analyzing, storing and sharing data?
- How will the data be analyzed?
- Who should be involved in conducting the studies?
- Does the team have all the skills needed to conduct the study? If not, how will additional expertise be obtained?
- What will the financial and management costs of the study be?
- Are the estimated costs of the studies proportionate to the overall project costs?
- Are adequate quality control procedures in place?
- How will the study results/recommendations be used?
Point to remember:
If an end-line study is planned, then both the baseline and end-line studies should use the same
methods of sampling, data collection and analysis, and collect the same data (set of indicators) for comparison.
Benchmark
A benchmark is a reference point or standard against which performance or achievements can be assessed, typically the performance that has been achieved in the recent past by other comparable organizations, or what can reasonably be inferred to have been achieved in the circumstances.
Beneficiaries
The individuals, groups or organizations who, in their own view and whether targeted or not, benefit directly or
indirectly from the interventions of ASARECA.
Bias
The extent to which the estimate of impact differs from the true value as a result of problems in the evaluation or
sample design, but not due to sampling error. Bias in sampling, for example, means ignoring or under-representing parts of the target population.
Capacity-building
The process through which capacity is created. This is an increasingly important cross-cutting issue in poverty reduction interventions.
Case Study
A methodological approach to describing a situation, individual, etc. that typically incorporates a number of data-gathering activities (e.g. interviews, observations, questionnaires) at select sites or programs.
Causality Analysis
Refers to an analysis used in program formulation to identify the root causes of development challenges. It
organizes the main data, trends and findings into relationships of cause and effect, and identifies root causes and
their linkages as well as the differentiated impact of the selected development challenges. A causality
frameworkor causality tree analysis (orproblem tree) can be used as a tool to cluster contributing causes and
examine the linkages among them and their various determinants.
Coherence
Compliance with the policies, guidelines, priorities, and approaches set by an institution.
Community
A group of people living in the same locality and sharing common characteristics.
Community of Practice
Refers to networks of people who work on similar processes or in similar disciplines and who come together to
develop and share their knowledge in that field for the benefit of both themselves and their organization. It may
be created formally or informally, and members can interact online or in person.
Control
A verification that financial documents are exact and expenditures conform to norms and to authorization
procedures (financial control); or a management function to determine if materials conform to technical
specifications and to international norms (technical control).
Comparison Group
Refers to individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of your
program participants. These individuals may not receive any services, or they may receive a different set of
services, activities, or products. In no instance do they receive the same service(s) as those you are evaluating. As part of the evaluation process, the experimental (or treatment) group and the control/comparison group are
assessed to determine which type of services, activities, or products provided by your program produced the
expected changes.
Control Group
A group of individuals whose characteristics (such as race/ethnicity, gender, and age) are similar to those of
your program participants, but do not receive the program (services, products, or activities) you are evaluating.
Participants are randomly assigned to either the treatment (or program) group or the control group. A control
group is used to assess the effect of your program on participants as compared to similar individuals not
receiving the services, products, or activities you are evaluating. The same information is collected for people in
the control group as in the experimental group.
Correlation
The strength of relationship between two (or more) variables. It can run in two directions:
- Positive correlation: one variable tends to increase together with another variable
- Negative correlation: one variable decreases as the other one increases
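A minimal Python sketch (standard library, Python 3.10 or later; the paired observations are purely illustrative) computing a Pearson correlation coefficient:

    from statistics import correlation  # available from Python 3.10

    # Illustrative paired observations: fertilizer applied (kg/ha) vs. maize yield (t/ha)
    fertilizer = [20, 40, 60, 80, 100]
    maize_yield = [1.1, 1.8, 2.4, 3.1, 3.5]

    r = correlation(fertilizer, maize_yield)
    print(f"Pearson r = {r:.2f}")  # a value near +1 indicates a strong positive correlation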
Cost-Benefit Analysis
A form of economic analysis that takes into account the benefits and costs in commensurable and actual
monetary values and arrives at a single index to determine the value of a project. The financial cost-benefit
analysis is made from the perspective of the project; an economic cost-benefit analysis is made from the
perspective of the entire economy of which the aid activity is part; a social cost-benefit analysis also includes
distributional considerations.
Cost-Effectiveness Analysis
An economic or social cost-benefit analysis that quantifies benefits without translating them into monetary
terms. This analysis allows one to compare alternative ways to accomplish the same objective(s). It also allows the selection of the activity, among those feasible, that attains the objective at the least cost.
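As a worked illustration with hypothetical figures: if training approach A costs USD 50,000 and leads 500 farmers to adopt an improved variety, while approach B costs USD 80,000 and reaches 640 adopters, the cost per adopting farmer is USD 100 for A against USD 125 for B, so A is the more cost-effective way of attaining the same objective.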
Counterfactual
It refers to an estimate of what the outcome (Y) would have been for a program or project participant in the
absence of the program (P). In other words, it is the situation or condition which hypothetically may prevail for
individuals, organizations or groups were there no development intervention. By definition, the counterfactual
cannot be observed. Therefore, it must be estimated using comparison groups.
Critical assumption
It refers to the hypothesis about factors or risks which could affect the progress or success of a development
intervention. It is an important factor outside management control that can strongly influence the project
implementation and success.
Note: Assumptions can also be understood as hypothesized conditions that bear on the validity of the evaluation
itself, e.g. about the characteristics of the population when designing a sampling procedure for a survey. Assumptions are made explicit in theory-based evaluations where evaluation tracks systematically the anticipated results chain.
Data
Describes information stored in numerical form, either in hard or soft format.
- Hard data: precise, numerical information
- Soft data: less precise, verbal information
- Raw data: survey information before it has been processed and analyzed
- Missing data: values or responses which fieldworkers were unable to collect (or which were lost before analyses)
- Gender-segregated data: information used to promote gender-balanced analyses

Data Collection Method
Refers to the strategy and approach used to collect data. The methods include: informal and formal surveys;
direct and participatory observation; community interviews; focus groups; expert opinion; case studies;
literature search, etc. In collecting data, the following questions should be addressed:
- What type of data should we collect?
- When should we collect data (how often)?
- What methods and tools will we use to collect data?
- Where do we get the data from?
- How do we ensure good quality data?
What type of data should we collect? There are two main types of data (qualitative and quantitative), and the type of data most appropriate for a project will depend on the indicators developed.
maximum of 90 minutes. Originally used as a market research tool to investigate the appeal of various
products, the focus group technique has been adapted as a tool for data collection in many other sectors.
FGDs are useful in answering the same type of questions as those posed in in-depth interviews but within a
social, rather than individual, setting. Specific applications of the focus group method in evaluations
include:
- identifying and defining achievements and constraints in project implementation
- identifying project strengths, weaknesses and opportunities
- assisting with interpretation of quantitative findings
- obtaining perceptions of project effects
- providing recommendations for similar future interventions
- generating new ideas
e) Document studies: Reviews of various documents that are not prepared for the purpose of the evaluation can provide insights into a setting and/or group of people that cannot be observed or noted in any other way.
For example, external public records include census and vital statistics reports, county office records,
newspaper archives and local business records that can assist an evaluator in gathering information about
the larger community and relevant trends. These may be helpful in understanding the characteristics of the
project participants to make comparisons between communities. Examples of internal public records are
organizational accounts, institutional mission statements, annual reports, budgets and policy manuals. They
can help the evaluator understand the institution's resources, values, processes, priorities and concerns.
Personal documents are first person accounts of events and experiences such as diaries, field notes,
portfolios, photographs, artwork, schedules, scrapbooks, poetry, letters to the paper and quotes. They can
help the evaluator understand an individual's perspective with regard to the project. Document studies are
inexpensive, quick and unobtrusive. However, accuracy, authenticity and access always need to be
considered.
Data Quality
Extent to which data adheres to the key dimensions of quality, namely:
- Accuracy
- Reliability
- Completeness
- Precision
- Timeliness
- Integrity: the security or protection of information from unauthorized access or revision
- Utility: the usefulness of the information for its intended users
- Objectivity: whether information is accurate, reliable, and unbiased, and whether it is presented in an accurate, clear and unbiased manner
Data Quality Assessment and/or Assurance
Set of internal and external mechanisms and processes to ensure that data meets the key dimensions of quality.
Data Quality Management
Refers to the establishment and deployment of roles, responsibilities, policies, and procedures concerning the
acquisition, maintenance, dissemination, and disposition of data. It allows an organization to see how the data
quality procedures put in place have caused the quality of the data to improve.
Delphi Technique
The Delphi technique enables experts who live in different locations to engage in dialogue and reach consensus
through an iterative process. Experts are asked specific questions; their answers are sent to a central source,
summarized, and fed back to the experts. The experts then comment on the summary. They are free to challenge
particular points of view or to add new perspectives by providing additional information. Because no one knows who said what, conflict is avoided.
Double Difference
The difference in the change in the outcome observed in the treatment group compared to the change observed
in the control group. Equivalently, it is also the change in the difference in the outcome between treatment and
control. Double differencing removes selection bias resulting from time-invariant unobservables. Also called
Difference-in-difference.
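A minimal Python sketch with purely illustrative numbers showing the computation:

    # Mean outcome (e.g. maize yield in t/ha) before and after an intervention; illustrative values
    treatment_before, treatment_after = 1.8, 2.6
    control_before, control_after = 1.7, 2.0

    # Change within each group over the same period
    treatment_change = treatment_after - treatment_before  # 0.8
    control_change = control_after - control_before        # 0.3

    # Double difference: change in treatment minus change in control
    did = treatment_change - control_change
    print(f"Estimated impact: {did:.1f} t/ha")  # 0.5 t/ha

The control group's change (0.3) stands in for what would have happened to the treatment group without the intervention, so subtracting it nets out time trends common to both groups.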
Effects
Intended or unintended changes resulting directly or indirectly from a development intervention.
- Primary effects: the changes brought about by an assistance effort to accomplish the specific objective of the intervention.
- Direct effects: the immediate costs and benefits of both the contributions to and the results of a project, without taking into consideration their effect on the economy.
- Indirect effects: the costs and benefits which are unleashed by the contributions to a project and by its results.
- External effects: the costs and benefits not taken into account in determining the expenditures and financial revenue of the aid programme.
- Intangible effects: costs and benefits which are thought to be pertinent but which cannot be measured and which, therefore, cannot be included in the economic analysis. These effects are taken into account by sociological analyses.
Evaluability Assessment
An evaluability assessment is a brief preliminary study undertaken to determine whether an evaluation would be
useful and feasible. This type of preliminary study helps clarify the goals and objectives of the program or
project, identify data resources available, pinpoint gaps and identify data that need to be developed, and identify
key stakeholders and clarify their information needs. It may also redefine the purpose of the evaluation and the
methods for conducting it. By looking at the intervention as implemented on the ground and the implications for
the timing and design of the evaluation, an evaluability assessment can save time and help avoid costly
mistakes.
Evaluability assessments are often conducted by a group, including stakeholders, such as implementers,
evaluators, and administrators. To conduct an evaluability assessment, the team:
- Reviews materials that define and describe the interventions
- Identifies modifications to the intervention
- Interviews managers, Principal Investigators, Scientists, and other staff on their perceptions of the intervention's goals and objectives
- Interviews stakeholders on their perceptions of, and level of satisfaction with, the intervention's goals and objectives
- Develops and refines a theory of change model
- Identifies sources of data and data collection methods
- Identifies people and organizations that can implement any possible recommendations from the evaluation.
Evaluation
Evaluation is a systematic and objective examination of a planned, ongoing or completed project or initiative at
a given point in time. It commonly seeks to determine the efficiency, effectiveness, impact, sustainability and
relevance of a project or organization's objectives. It requires an in-depth review at specific points in the life of
the project, usually at the mid-point or end of a project. It verifies whether project objectives have been achieved or
not. It is a management tool which can assist in evidence-based decision making, and which provides valuable
lessons for implementing organizations and their partners.
Evaluation helps to answer questions such as:
- How relevant was our work in relation to the primary stakeholders and beneficiaries?
- To what extent were the project objectives achieved?
- What contributed to and/or hindered these achievements?
- Were the available resources (human, financial) utilized as planned and used in an effective way?
- What are the key results, including intended and unintended results?
- What evidence is there that the project has changed the lives of individuals and communities?
- How has the project helped to strengthen the management and institutional capacity of the organization?
- What is the potential for sustainability, expansion and replication of similar interventions?
- What are the lessons learned from the intervention?
- How should those lessons be utilized in future planning and decision making?
There are several approaches to development evaluation.
Evaluation Criteria
When evaluating development programs and projects, it is useful to consider the following criteria:
1. Effectiveness
The extent to which an organization, policy, program or initiative is meeting its expected results. It is the
degree of achievement of the planned specific objectives and thus the extent to which the beneficiaries have
reaped the planned benefits. In evaluating the effectiveness of a program or project, it is useful to consider
the following questions:
- To what extent were the planned resources used to meet project objectives?
- What were the major factors influencing the achievement or non-achievement of the objectives?
- What was the rate of disbursement of project resources?
- To what extent did the interventions address the capacity needs identified?
- What was the quality of capacity built?
- To what extent were the capacity building skills acquired utilized?
Related term: Cost effectiveness, i.e. the extent to which an organization, policy, program or initiative is
using the appropriate and efficient means in achieving its expected results relative to alternative design and
delivery approaches.
2. Efficiency
Efficiency measures the outputs (qualitative and quantitative) in relation to the inputs. It is an economic
term which is used to assess the extent to which aid uses the least costly resources possible in order to
achieve the desired results. This generally requires comparing alternative approaches to achieving the same
outputs, to see whether the most efficient process has been adopted. When evaluating the efficiency of a
project or program, it is useful to consider the following questions:
- Were the activities cost-effective?
- Were objectives achieved on time?
- Was the project or program implemented in the most efficient way compared to alternatives?
3. Relevance
Relevance refers to the extent to which the supported interventions are suited to the priorities and policies
of the target group, recipient and donors. In evaluating the relevance of a project or program, it is useful to
consider the following questions:
- To what extent are the objectives of the program or project still valid?
- Are the activities and outputs of the program or project consistent with the overall goal and the attainment of its objectives?
- Are the activities and outputs of the program or project consistent with the intended impacts and effects?
4. Impacts
The positive or negative changes produced by a development intervention, directly or indirectly, intended or
unintended. This involves the main impacts and effects resulting from the activity on the local social,
economic, environmental and other development indicators. The examination should be concerned with
both intended and unintended results and must also include the positive and negative impact of external
factors, such as changes in terms of trade and financial conditions.
When evaluating the impact of a program or project, it is useful to consider the following:
- What happened as a result of the program or project?
- What real difference has the activity made to the beneficiaries?
- How many people have been affected?
5. Sustainability
Sustainability is concerned with measuring whether the benefits and effects generated by a project or
program will continue after the donor funding has been withdrawn and the project terminated. Projects
need to be environmentally as well as financially sustainable.
When evaluating the sustainability of a program or project, it is useful to consider the following questions:
- To what extent did the benefits of a program or project continue after donor funding ceased?
- What were the major factors which influenced the achievement or non-achievement of sustainability of the program or project?
Evaluation Framework Study
Assessment conducted at the start-up of the project to verify the conditions for allowing monitoring and
evaluation of the project. It includes the review of data availability and possible collection of baseline data; the
final selection of the indicators; the agreement on the targets to be achieved and their measurement on the basis
of selected indicators; and the selection/provision of tools for data collection according to the selected/available
sources of verification.
Evaluative Research
Evaluation research is variously defined as a type of study that uses standard social research methods for
evaluative purposes, as a specific research methodology, and as an assessment process that employs special
techniques unique to the evaluation of social programs.
Ex-ante Evaluation (Prospective Evaluation)
A prospective evaluation is conducted ex ante; that is, a proposed program is reviewed before it begins, in an
attempt to analyze its likely success, predict its cost, and analyze alternative proposals and projections. Most
prospective evaluations involve the following kinds of activities:
- A contextual analysis of the proposed program or policy
- A review of evaluation studies on similar programs or policies and a synthesis of the findings and lessons from the past
- A prediction of likely success or failure, given a future context that is not too different from the past, and suggestions on strengthening the proposed program and policy if decision makers want to go forward.
Expected Results
An outcome that a program, policy or initiative is designed to produce.
Ex-post Evaluation (Analysis)
Evaluation of a development intervention after it has been completed. It may be undertaken directly after or long
after completion. The intention is to identify the factors of success or failure, to assess the sustainability of
results and impacts, and to draw conclusions that may inform other interventions.
This is the evaluation produced after the project is completed, which includes not only the summative evaluation
of the project itself (typically in terms of processes and outputs) but also an analysis of the project's impact on
its environment and its contribution to wider (economic/societal/educational/community, etc.) goals and
policies. It should also lay down a framework for future action leading, in turn, to the next ex ante study. In
reality, ex post evaluations often take so long to produce (in order to measure long-term impact) that they are
too late to influence future planning.
External Evaluation
The evaluation of a development intervention conducted by entities and/or individuals outside the donor and
implementing organizations.
Note: Externally conducted evaluation is not necessarily independent evaluation. If the evaluation is conducted
externally but is funded by and under the general oversight of Program Managers and Principal Investigators, it
is an internal evaluation and should not be deemed independent.
Feasibility study
It is an assessment conducted during the appraisal phase to verify whether the proposed project is well founded,
and is likely to meet the needs of its intended target groups/beneficiaries. It should take into account all policy,
technical, economic, financial, institutional, management, environmental, socio-cultural and gender-related aspects.
Feedback
The transmission of findings generated through the evaluation process to parties for whom it is relevant and
useful so as to facilitate learning. This may involve the collection and dissemination of findings, conclusions,
recommendations and lessons from experience.
Formative Evaluation
Refers to an evaluation intended to improve performance, most often conducted during the implementation phase of projects or programs. It can also be conducted for other reasons such as compliance, legal requirements
or as part of a larger evaluation initiative. Learning how the program is being implemented, including the
challenges and strong points, can serve as useful information for improving practice, rethinking how to go about
things, and identifying future action steps.
It includes several evaluation types, e.g.:
- Needs assessment: determines who needs the program, how great the need is, and what might work to meet the need
- Evaluability assessment: determines whether an evaluation is feasible and how stakeholders can help shape its usefulness
- Structured conceptualization: helps stakeholders define the program or technology, the target population, and the possible outcomes
- Implementation evaluation: monitors the fidelity of the program or technology delivery
- Process evaluation: investigates the process of delivering the program or technology, including alternative delivery procedures
Gender
Gender refers to the social roles assigned to men and women based on their sex.
Gender Analysis
Assessment of the likely differences in the impacts of proposed policies, programmes or projects on women and
men. It includes attention to: the different roles; the differential access to and use of resources and their specific
needs, interests and problems; and the barriers to the full and equitable participation of women and men in
project activities and the equitable distribution of the benefits obtained.
Goal
Refers to the sectoral, national, or organizational objectives to which the project is designed to contribute. It can
also be thought of as describing the expected impact of the project. It is a statement of intention that defines the
main reason for undertaking the project.
Ground-truth
This refers to a test run, a pilot, or a pre-test study. It implies testing a technology or an innovation or any
activity in the setting where it will be used. It also refers to checking ideas or methods in the real world.
Hierarchy of objectives
This is a tool that helps to analyze and communicate programme objectives and shows how local interventions
should contribute to global objectives. It organizes these objectives into different levels (objectives, sub-
objectives) in the form of a hierarchy or tree, thus showing the logical links between the objectives and their
sub-objectives. It presents in a synthetic manner the various intervention logics derived from the regulation, which link individual actions and measures to the overall goals of the intervention.
Horizontal logic
Indicates the relation between the resources and the results of a project or programme through the identification
of objectively verifiable indicators and means of verification for these indicators.
Inception Phase
The period from the project start-up until the work plan, Logframe Matrix and evaluation framework study have
been updated and finalized. It lasts between one and three months and ends with a first project report.
Independent Evaluation
Refers to an evaluation carried out by entities and persons free of the control of those responsible for the design
and implementation of the development intervention.
Note: The credibility of an evaluation depends in part on how independently it has been carried out.
Independence implies freedom from political influence and organizational pressure. It is characterized
by full access to information and by full autonomy in carrying out investigations and reporting
findings.
Indicator
An indicator is a marker of performance showing progress and helping measure change. It comes from the
Latin words "in" (towards) and "dicare" (make known).
Types of Indicators:
1. Input indicators: these indicators measure the provision of resources, for example the number of full-time staff working on the project.
2. Process indicators: these indicators provide evidence of whether the project is moving in the right direction to achieve the set objectives. They relate to multiple activities that are carried out to achieve project objectives, e.g.:
- What has been done? Examples include training outlines, policies/procedures developed, number of varieties produced.
- Who and how many people have been involved? Examples include number of participants, proportion of ethnic groups, age groups, number of partner organizations involved.
- How well have things been done? Examples include proportion of participants who report they are satisfied with the service or information provided, etc.
3. Output indicators: these indicators demonstrate the change at project level as a result of activities undertaken. Examples include number of demand-driven technologies generated, number of policy options presented for legislation or decree, etc.
4. Outcome indicators: these indicators illustrate the change with regard to the beneficiaries of the project in terms of knowledge, attitudes, skills or behavior. These indicators can usually be monitored after a medium to long term period. Examples include the number of users of new varieties in a community, etc.
5. Impact indicators: these indicators measure the long-term effect of a program, often at the national or population level. Examples of impact indicators include: percent change in total factor productivity; percent change in selected crops, etc. Impact measurement requires rigorous evaluation methods, longitudinal study and an experimental design involving control groups in order to assess the extent to which any change observed can be directly attributed to project activities.
Other types of indicators:
1. Proxy indicators: these indicators provide supplementary information where direct measurement is unavailable or impossible to collect.
2. Quantitative and qualitative indicators: all the indicators discussed above can be categorized as qualitative or quantitative indicators on the basis of the way they are expressed. Quantitative indicators are essentially numerical and are expressed in terms of absolute numbers, percentages, ratios, binary values (yes/no), etc. Qualitative indicators are narrative descriptions of phenomena measured through people's opinions, beliefs and perceptions and the reality of people's lives in terms of non-quantitative facts. Qualitative information often explains the quantitative evidence, e.g. what are the reasons for low levels of technology adoption? Why do so few men use the introduced varieties? What are the cultural determinants that contribute to the need for appropriate information packages? Qualitative information supplements quantitative data with a richness of detail that brings a project's results to life.
It is important to select a limited number of key indicators that will best measure any change in the project
objectives and which will not impose unnecessary data collection. As there is no standard list of indicators, each
project will require a collaborative planning exercise to develop indicators related to each specific objective and
on the basis of the needs, theme and requirements of each project.
Criteria of a strong Performance Indicator
- Validity: Does the performance indicator actually measure the result?
- Reliability: Is the performance indicator a consistent measure over time?
- Sensitivity: When the result changes, will the performance indicator be sensitive to those changes?
- Simplicity: How easy will it be to collect and analyze the data? Does it present challenges? Is it complex? Does it need technical expertise to understand?
- Utility: Will the information be useful for program management (decision-making, learning, and adjustment)?
- Affordability: Can the program afford to collect the information?
Innovation
It refers to a creative and interactive process of making improvements by successfully introducing something
new into the social and economic practices. It goes far beyond the confines of research labs to users, suppliers
and consumers everywhere: in government, business and non-profit organizations, across borders, across
sectors, and across institutions. The Oslo Manual defines four types of innovation: Product innovation; Process
innovation; Marketing innovation; and Organizational innovation.
- Product innovation: A good or service that is new or significantly improved. This includes significant improvements in technical specifications, components and materials, software in the product, user friendliness or other functional characteristics.
- Process innovation: A new or significantly improved production or delivery method. This includes significant changes in techniques, equipment and/or software.
- Marketing innovation: A new marketing method involving significant changes in product design or packaging, product placement, product promotion or pricing.
- Organizational innovation: A new organizational method in business practices, workplace organization or external relations.
Innovation Policies (Agricultural)
Refer to policies designed to enhance the stakeholders' capacity to innovate in the agricultural sector. They
operate on both the formal and informal sources of innovation. Based on the Innovation Systems Framework,
innovation policies are hereby classified into three categories:
- Policies designed to create and strengthen the formal organizations and institutions needed to generate and apply new or existing information
- Policies that support and facilitate innovation among system actors, including farmers
- Policies that integrate and intermediate among public, private, and civil society actors engaged in innovation processes.
Potential indicators on agricultural innovation policy include:
- Expert assessments of policies on agricultural research, education, and extension/advisory services
- Average distance of farm households to markets
- Membership in international regimes, e.g. the International Union for the Protection of New Varieties of Plants (UPOV) or the International Treaty on Plant Genetic Resources for Food and Agriculture (ITPGRFA)
Innovation System
An Innovation System refers to a network of organizations, enterprises, and individuals focused on bringing new
products, new processes, and new forms of organization into social and economic use, together with the
institutions and policies that affect their behavior and performance. The IS concept embraces not only the
science suppliers, but also the totality and interaction of actors involved in innovation. It gives more attention to:
- The interaction between research and related economic activities
- The attitudes and practices that promote interaction and the learning that accompanies it
- The creation of an enabling environment that encourages interaction and helps to put knowledge into socially and economically productive use.
Inputs
These are the human, financial, material/physical and information resources used to produce outputs through
activities and accomplish outcomes.
Internal Evaluation
Refers to an evaluation of a development intervention conducted by a unit and/or individuals reporting to the
management of the donor, partner, or implementing organization.
Interval scale
Refers to measurements with defined and constant intervals between successive values (e.g. attitude measures
and rankings). On an interval scale, all the values are continuous.
Intervention Logic
The strategy underlying the project. It is the narrative description of the project at each of the four levels of the
hierarchy of the objectives used in the Logframe.
Joint Evaluation
An evaluation in which different donor agencies and/or partners participate.
Note: There are various degrees of jointness depending on the extent to which individual partners
cooperate in the evaluation process, merge their evaluation resources and combine their evaluation
reporting. Joint evaluations can help overcome attribution problems in assessing the effectiveness of
programs and strategies, the complementarity of efforts supported by different partners, the quality of
aid coordination, etc.
Key informants
People in a community, region, or organization who, because of their position, are able to provide information or
insights on some aspects relevant to the project. These informants play a key role in evaluation, especially in
qualitative evaluation, though it is important to bear in mind that they also provide a subjective/one-sided
perspective. Therefore the evaluators will have to obtain the information from a large number of key informants.
Key performance indicator (KPI)
A variable that allows the verification of changes in the development intervention or shows results relative to
what was planned. Key performance indicators may be selected from overall objectively verifiable indicators,
but should adequately and sufficiently measure the intended change either singly or in combination.
Learning
This is the process by which knowledge and experience directly influence changes in behavior. It also refers to
reflection on experiences to identify how a situation or future actions could be improved. This can be individual
or group-based. Learning involves applying lessons learned to future actions, which provides the basis for
another cycle of learning. Thus, we learn to:
- Increase effectiveness and efficiency
- Increase the ability to initiate and manage change
- Utilize institutional knowledge and promote organizational learning
- Improve cohesion among different units of the organization
- Increase adaptability for opportunities, challenges and unpredictable events
- Increase motivation, confidence and proactive learning
Lessons Learned
Refer to the conclusions extracted from reviewing a development program or project, or even activities by
participants, managers, beneficiaries or evaluators with implications for effectively addressing similar issues or
problems in another setting. Frequently, lessons highlight strengths or weaknesses in preparation, design, and
implementation that affect performance, outcome, and impact.
Logical Framework
Management tool used to improve the design of interventions, most often at the project level. It involves identifying strategic elements (inputs, outputs, outcomes, impact) and their causal relationships, indicators, and
the assumptions or risks that may influence success and failure. It thus facilitates planning, execution and
evaluation of a development intervention. The following is the layout of the Logical Framework Matrix:
Columns: Intervention logic | OVIs of achievement | Sources and means of verification | Assumptions

Impact
- Intervention logic (13): What is the overall impact of the project?
- OVIs (14): What are the key indicators related to the impact?
- Sources and means of verification (15): What are the sources of information for these indicators?

Outcomes
- Intervention logic (9): What specific outcome is the action intended to achieve?
- OVIs (10): Which indicators clearly show that the outcome has been achieved?
- Sources and means of verification (11): What are the sources of information that exist or can be collected? What are the methods required to get this information?
- Assumptions (12): Which risks should be taken into consideration?

Outputs
- Intervention logic (5): Outputs are the results envisaged to achieve the specific objective.
- OVIs (6): Enumerate the outputs. What are the indicators to measure whether and to what extent the action achieves the expected outputs?
- Sources and means of verification (7): What are the sources of information for these indicators?
- Assumptions (8): What external conditions must be met to obtain the expected outputs on schedule?

Activities
- Intervention logic (1): What are the key activities to be carried out and in what sequence in order to produce the expected results? (group the activities by result)
- Means (2): What are the means required to implement these activities, e.g. personnel, equipment, training, studies, supplies, operational facilities, etc.?
- Sources and means of verification (3): What are the sources of information about action progress?
- Assumptions (4): What pre-conditions are required before the action starts? What conditions outside the beneficiary's direct control have to be met for the implementation of the planned activities?
Logical Framework Approach (LFA)
A methodology for planning, managing and evaluating programmes and projects involving stakeholder analysis,
problem analysis, analysis of objectives, analysis of strategies, preparation of the Logframe Matrix and activities and resources schedule.
Management Information System (MIS)
The creation, through a well-designed monitoring system, of regular feedback to the management at the project
and central levels on all key aspects of a project.
Means of Verification
Refers to the expected source of the information we need to collect. MoVs should clearly specify this source.
They ensure that the indicators can be measured effectively by specification of types of data, sources of
information and methods of collection.
Meta Evaluation
This is an evaluation designed to aggregate findings from a series of evaluations. It can also be used to denote
the evaluation of an evaluation to judge its quality and/or assess the performance of the evaluators.
Mid-Term Review/Evaluation
This is the point at which progress-to-date is formally measured to see whether the original environment has
changed in a way which impacts on the relevance of the original objectives. It is an opportunity to review these
objectives if necessary, decide whether the project is on target in terms of its projected outputs, adjust the
working practices if necessary or, in certain circumstances, re-negotiate timescales or outputs. It is often not
carried out at the 'mid-point' at all but at the end of a significant phase!
Milestones
They correspond to the process indicators. They are an indication of short and medium-term objectives (usually
activities) which facilitate the measurement of achievements throughout the project rather than just at the end.
They also indicate times when decisions should be taken or an action should be finished.
Monitoring
This is an ongoing and systematic collection and analysis of information to assist timely decision making, ensure accountability and provide part of the data for evaluation and learning. It provides project and program
It helps to answer the following questions:
- How well are we doing?
- Are we doing the activities we planned to do?
- Are we following the designated timeline?
- Are we over/under-spending?
- What are the strengths and weaknesses in the project?
Monitoring and Evaluation (M&E) Framework
Refers to a holistic approach that can address the program needs, monitor program processes and outputs, and
evaluate goals and program/project objectives. It encompasses the program planning processes right down to the
documentation and dissemination plan.
Monitoring and Evaluation (M&E) Plan
A comprehensive narrative document on all M&E activities (a summary of the M&E Framework). It addresses key
M&E questions: what indicators to measure; sources, frequency and method of indicator data collection;
baselines, targets and assumptions; how to analyze or interpret data; frequency and method for report
development and distribution of the indicators; and how the components of the M&E System will function.
Monitoring and Evaluation (M&E) System
M&E system is a framework of M&E principles, practices and standards to be used throughout ASARECA. It is
also envisaged to function as an apex-level information system which draws from program and project systems
to deliver useful M&E products for stakeholders.
Most Significant Change (MSC)
This is a system designed to record and analyze change in projects or programs where it is not possible to
precisely predict changes beforehand, and where it is therefore difficult to set predefined indicators. It is also designed to
ensure that the process of analyzing and recording change is as participatory as possible. It aims to identify
significant changes brought about by a development intervention, especially in those areas where changes are
qualitative and therefore not susceptible to statistical treatment. It relies on people at all stages of a project or
program meeting to identify what they consider to be the most significant changes within predefined areas or
domains.
Its strength lies in its ability to produce information-rich stories that can be analyzed for lesson learning. It also
involves a transparent process for the generation of stories that shows why and how each story was chosen. It is
designed around purposive sampling: sampling to find the most interesting or revealing stories.
NARS
The National Agricultural Research Systems (NARS) comprise mainly national institutes of agricultural
research, universities, training and extension services, users of agricultural products and civil society
organizations (NGO, Producer Organizations, and Private Sector). This system is important in the promotion of
the new paradigm of integrated agricultural research for development (IAR4D). In general, the implementation
of the technical programmes (Agro-biodiversity, Livestock and Fisheries, Staple crops, High Value Non-staple
crops, Natural Resources Management and Biodiversity, etc) must be realized through networking between members of NARS of ASARECA. Research activities carried out within this framework are mainly funded on a
competitive basis. However, commissioned research could be carried out, as the case may be, by specialized
centers in the sub-region. The knowledge management and upscaling programme is carried out through
networking at the level of NARS and through competitive funds and commissioned research, as the case may
be.
Nominal Scale
Refers to classifications that form no natural sequence. Numbers are sometimes assigned to characteristics for identification, but they have no mathematical value and cannot be used for mathematical functions.
Objective
A specific statement detailing the desired accomplishments of a project. It is specified in terms of desired
changes in behaviors and practices as a result of training or services provided by a project. Examples: reduction
of malnutrition, increase in income, improvement in the environment.
Objective Tree
A diagrammatic representation of the situation in the future once problems have been remedied, following a
problem analysis, and showing a means-to-ends relationship.
Objectively Verifiable Indicators (OVI)
Indicators at the different levels of objectives; they represent the second column of the logical framework. OVIs
provide the basis for designing an appropriate monitoring system.
Ordinal Scale
Refers to measurements using classifications with a natural sequence (lowest to highest), but with undefined
intervals. The values are discontinuous.
Outcome Mapping
Is a methodology for planning and assessing development programming that is oriented towards change and
social transformation. It provides a set of tools to design and gather information on the outcomes, defined as behavioral changes, of the change process. It helps a project or program learn about its influence on the progression of change in its direct partners, and therefore helps those in the assessment process think more systematically and pragmatically about what they are doing and to adaptively manage variations in strategies to
bring about the desired outcomes. It puts people and learning at the centre of development and accepts
unanticipated changes as potential for innovation.
It consists of three phases:
- Intentional design: The program or project frames its activities based on the changes it intends to help bring about; its actions are purposely chosen so as to maximize the effectiveness of its contributions to development.
o Vision Statement: describes why the program or project is engaged in development and provides an inspirational focus. In drafting the vision, project implementers must be visionary, by establishing a vivid beacon to motivate staff and highlight the ultimate purpose of their day-to-day work.
o Mission Statement: describes how the program or project intends to support the vision, by stating the areasin which the program or project will work toward the vision. It does not list all the planned activities. In
developing the mission statement, the project implementers should consider not only how the program will
support the achievement of outcomes by its boundary partners, but also how it will keep itself effective,
efficient, and relevant.
o Boundary Partners refer to individuals, groups, or organizations with whom the project or program interactsdirectly and with whom the program can anticipate some opportunities for influence (e.g. NGOs, indigenous
groups, churches, community leaders, regional administration, private sector, academic and research
institutions, international institutions, etc). They are assumed to control change since they operate within
different logic and responsibility systems.
o Outcome Challenge: Once the boundary partners have been identified, an outcome challenge statement is developed for each of them. Outcomes are the effects of the program being there, with a focus on how
the behavior, relationships, activities, or actions of an individual, group, or institution will change if the
program or project is extremely successful. They are phrased in a way that emphasizes behavioral change.
o Progress Markers: A graduated set of statements describing a progression of changed behaviors in the boundary partner that will lead to the outcome challenge (e.g. the changes you expect to see, would like to see, and would love to see).
o Strategy Maps: A combination of strategies or activities aimed at the boundary partner (outputs, new skills, support needs) and at the environment of the partner (rules of the game, information availability, networking,
etc.).
o Organizational Practices: Refers to practices that determine an organization's effectiveness: practices that foster creativity and innovation, assist partners, and maintain the organization's niche.
• Outcome and performance monitoring: It provides a framework for the ongoing monitoring of the program's or project's actions in support of the outcomes and the boundary partners' progress towards
the achievement of outcomes. It is based largely on systematized self-assessment.
o Monitoring Priorities: Refers to what (information), who (will collect and use it), when (it should be collected),
how (it will be collected and used), etc. This information is then collected using the following three tools:
o Outcome Journals: Data collection tools for monitoring the progress of a boundary partner in achieving progress markers over time. Each journal describes the level of change as low, medium, or high, and provides a place to record
who among the boundary partners exhibited the change. Information explaining the reasons for the
change, the people and circumstances that contributed to it, evidence of the change, a record of
unanticipated change, and lessons for the program or project is also recorded in order to keep a running record
of the context for future analysis or evaluation.
o Strategy Journals: Data collection tools for monitoring the strategies a program uses to encourage change in the boundary partner. Some of the planning and management questions that project implementers might want
to consider during monitoring meetings, after completing the strategy journal, include: What are we doing well
and what should we continue doing? What are we doing okay or badly and what can we improve? What
strategies or practices do we need to add? What strategies or practices do we need to give up (those that have
produced no results, or require too much effort or too many resources relative to the results obtained)? How
are we, or should we be, responding to the changes in boundary partners' behavior? Who is responsible? What are
the timelines? Has any issue come up that we need to evaluate in greater depth? What? When? Why? How?
o Performance Journal: A data collection tool for monitoring how well the program or project is carrying out its organizational practices. It records data on how the program is operating as an organization to fulfill its
mission. A single performance journal is created for the program and filled out during the regular monitoring
meetings.
• Evaluation planning: It helps the program or project to identify evaluation priorities and develop an evaluation plan.
o Evaluation Plan: A short description of the main elements of an evaluation study to be conducted. It identifies who will use the evaluation, how and when, what questions should be answered, what information
is needed, who will collect this information, how and when, and how much it will cost.
Outcomes
Outcomes are the changes within the community or among the researchers that can be attributed, at least in
part, to the research process. Outcomes result both from meeting research objectives (outputs) and from the
participatory research process itself. They can be negative or positive, expected or unexpected, and encompass
both the functional effects of participatory research (e.g. greater adoption and diffusion of new technologies, changed farming practices, changes in institutions or management regimes) and the empowering effects (e.g.
increased community capacity, improved confidence or self-esteem, and improved ability to resolve conflict or
solve problems). The desired outcomes of participatory research in natural resource management projects, for
example, generally involve social transformation; many are diffuse, long term, and notoriously difficult to
measure or to attribute to a particular research project or activity.
Three types of outcomes related to the logic model are defined as:
1. Immediate Outcome: an outcome that is directly attributable to a policy, program or initiative's outputs. In terms of time frame and level, these are short-term outcomes and are often at the level of an
increase in awareness of a target population. Examples include: increase in awareness/skills of …,
access to …, etc.
2. Intermediate Outcome: an outcome that is expected to logically occur once one or more immediate outcomes have been achieved. In terms of time frame and level, these are medium-term outcomes and
are often at the change-of-behavior level among a target population. Examples include: increased
adoption of crop varieties in Country X, increased area under a technology or management practice in
…, etc.
3. Final Outcome: the highest-level outcome that can be reasonably attributed to a policy, program or initiative in a causal manner, and is the consequence of one or more intermediate outcomes having been
achieved. These outcomes usually represent the raison d'être of a policy, program or initiative. They are
long-term outcomes that represent a change of state of a target population. The ultimate outcomes of
individual programs, policies or initiatives contribute to higher-level departmental Strategic
Outcomes.
Outputs
Direct products and services stemming from the activities of an organization, policy, program or initiative, and
usually within the control of the organization itself. These products and services are delivered to the project
participants, thus helping to achieve intermediate changes that result from accessing and using inputs.
Examples of outputs are:
• The research activities undertaken, as well as the tangible products of the research
• Information, such as a profile of a community, documentation of indigenous knowledge of plant species or local management practices, etc. (organized in a report, for example)
• Products, such as new techniques or technologies developed through farmer experimentation, new management regimes for common resources, new community institutions and organizations, or community development plans
• Measures, such as the number of people trained, the number of farmers involved in on-farm experiments, pamphlets produced, research studies conducted, and the number of reports or publications of the research
Evaluators will assess the quality of the outputs (e.g. what was the nature of the activities? Were all those
interested in the project able to participate? Are the outputs useful? For whom? etc).
Overall Objective
It explains why the project is important to the society in terms of long-term benefits to final beneficiaries as well
as wider benefits to other groups. It may also help to show how a programme fits into the regional/sectoral
policies of the government/organization concerned and of the donor community.
Participatory Approach
It refers to the involvement of project participants in the design, monitoring and evaluation of a project. It is particularly suitable for process projects, but requires specific skills to be implemented and is more
time-consuming than other approaches. On the other hand, the use of a participatory approach increases beneficiaries'
ownership and therefore the potential sustainability of project results.
Participatory Evaluation
Refers to the evaluation method in which representatives of agencies and stakeholders (including beneficiaries)
work together in designing, carrying out and interpreting an evaluation.
Participatory Process
One or more processes in which the key stakeholders take part in specific decision-making and action, and over
which they may exercise specific controls. It is often used to refer specifically to processes in which primary
stakeholders take an active part in planning and decision-making, implementation, learning and evaluation. This
often has the intention of sharing control over the resources generated and responsibility for their future use.
Partners
These are individuals and/or organizations with whom/which ASARECA works cooperatively to achieve
mutually agreed upon objectives, outputs and outcomes, and to secure stakeholder participation. Partners
include: universities, community-based organizations, farmer organizations, the private sector, CG Centres,
governments, civil society, non-governmental organizations, professional and business
associations, multilateral organizations, private companies, etc.
Performance
The degree to which a development intervention or a development partner operates according to specific
criteria/standards/guidelines or achieves results in accordance with stated goals or plans.
Performance Indicator
Performance indicator refers to a particular characteristic or dimension used to measure intended changes
defined by an organization's Logframe or results framework. They are used to observe progress and to measure
actual results compared to expected results. Performance indicators help to answer whether a project is
progressing toward its objective, rather than why/why not such progress is being made. They are usually
expressed in quantifiable terms, and should be objective and measurable (numeric values, percentages, scores
and indices). Quantitative indicators are preferred in most cases, although in certain circumstances qualitative
indicators are appropriate.
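As a simple illustration of how a quantifiable indicator can be tracked against its target, the following minimal sketch uses invented figures, and percent_of_target is a hypothetical helper, not an ASARECA function:

```python
# A minimal sketch: expressing an indicator's actual value as a share of the
# planned change from baseline to target. All figures are invented examples,
# and percent_of_target is a hypothetical helper, not an ASARECA function.
def percent_of_target(baseline: float, actual: float, target: float) -> float:
    """Share (in %) of the planned change (baseline -> target) achieved so far."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return 100.0 * (actual - baseline) / (target - baseline)

# E.g. adoption rose from a 12% baseline to 26%, against a 40% target:
print(f"{percent_of_target(12, 26, 40):.0f}% of planned change achieved")  # 50%
```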
Performance Measurement
A system for assessing performance of development interventions against stated goals. It also refers to the
collection, interpretation, and reporting of data for performance indicators that measure how well programs
or projects deliver outputs and contribute to the achievement of goals.
Performance Monitoring
Performance monitoring refers to the continuous process of collecting and analyzing data to measure the
performance of a program, project, process or activity against expected results. A defined set of indicators is
constructed to regularly track the key aspects of performance. Performance reflects effectiveness in converting
inputs into outputs, outcomes and impacts.
Performance Monitoring Framework (PMF)
A plan to systematically collect relevant data over the lifetime of an investment to assess and demonstrate
progress made in achieving expected results. It documents the major elements of the monitoring system and
ensures that performance information is collected on a regular basis. It contains information on expected results,
indicators, baseline data, targets, data sources, data collection methods, frequency, and the responsibility for
data collection. For example:
• Data Sources: Individuals, organizations, or publications from which data about your performance indicator will be obtained. Identify the data sources for each performance indicator that has been selected, and focus on existing sources to maximize value from existing data {beneficiaries, partner organizations, government documents, tracking sheets, partner statistical reports, consultants, ASARECA staff}.
• Data Collection Methods: Represent HOW data about performance indicators are collected {e.g. analysis of records or documents, literature review, survey, interviews, focus group, questionnaire, pre- and post-intervention survey, comparative study, collection of anecdotal evidence, observing participants, etc.}.
• Frequency: How often will the information about each performance indicator be collected? Some indicators may be looked at regularly as part of ongoing performance management, while others will only be collected periodically for baseline, mid-term, or final evaluations.
• Responsibility: Who is responsible for collecting and validating the data? {e.g. beneficiaries, local professionals, partner organizations, consultants, ASARECA staff, etc.}
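One way to see how these elements fit together is to sketch a single PMF row as a structured record; this is a minimal illustration, and the field values below (indicator, baseline, target, source, etc.) are invented examples rather than actual ASARECA indicators:

```python
# A minimal sketch of one Performance Monitoring Framework row as a record.
# The field names mirror the elements listed above; all values are invented.
from dataclasses import dataclass

@dataclass
class PMFRow:
    expected_result: str
    indicator: str
    baseline: float
    target: float
    data_source: str
    collection_method: str
    frequency: str
    responsibility: str

row = PMFRow(
    expected_result="Increased adoption of improved seed varieties",
    indicator="% of farmers using improved seed",
    baseline=12.0,
    target=40.0,
    data_source="Partner statistical reports",
    collection_method="Household survey",
    frequency="Annual",
    responsibility="ASARECA M&E unit",
)
print(row)
```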
Performance Monitoring Plan (PMP)
The PMP is a detailed plan for managing the collection of data in order to monitor performance. It identifies the
indicators to be tracked; specifies the source, method of collection, and schedule of collection for each piece of
data required; and assigns responsibility for collection to a specific office, team, or individual. It contributes to
the effectiveness of the performance monitoring system by ensuring that comparable data will be collected on a
regular and timely basis. It is mainly used for:
• Planning to monitor achievement of program implementation
• Collecting and analyzing performance information to track progress towards planned results and outcomes
• Using performance information to improve management decision-making and resource allocation
• Communicating results achieved, or not attained, to advance organizational learning
It clearly spells out the desired results, performance indicators, baselines and targets, and plans for data collection,
analysis, reporting and utilization.
Performance Monitoring System
It refers to an organized approach or process for systematically monitoring the performance of a program,
project, process or activity toward its objectives over time. Performance monitoring systems at ASARECA
consist of, inter alia: performance indicators; performance baselines; performance targets for all result areas;
means for tracking critical assumptions; performance monitoring plans to assist in managing the data collection
process; and the regular collection of actual results data.
Planning
A broad description of the activities that would normally be carried out as part of project development, from
start to finish, and the milestones that would generally be achieved along the way, such as signing sub grant
agreements, capacity building details, etc. The plan should also explain the different aspects that need to be addressed as part of project development, and illustrate basic principles that are to be followed. The sequence of
and relationship between main activities and milestones should also be described.
Portfolio Review
A required systematic analysis of the progress of an Objective, Output, or Outcome by the M&E Units. It
focuses on both operational and strategic issues and examines the robustness of the underlying development
hypotheses and the impact of activities on results. It is intended to bring together various expertise and points of
view to arrive at a conclusion as to whether the program is on track or if new actions are needed to improve
the chances of achieving results. At a minimum, a portfolio review must examine the following:
a) Progress towards achievement of Objectives, Outputs, or Outcomes, and expectations regarding future results achievements.
b) Evidence that outputs of activities are adequately supporting the relevant outcomes and ultimately contributing to the achievement of the purpose and goals.
c) Adequacy of inputs for producing activity outputs and efficiency of processes leading to outputs.
d) Status and timeliness of input mobilization efforts.
e) Status of critical assumptions and causal relationships defined in the results framework, along with the related implications for performance towards outcomes and goals.
f) Status of related partner efforts that contribute to the achievement of results.
g) Pipeline levels and future research requirements.
Power
Power is the probability of detecting an impact if one has occurred. The power of a test is equal to 1 minus
the probability of a type II error, and ranges from 0 to 1. Commonly used levels of power are 0.8 and 0.9. Higher levels of
power are more conservative and decrease the likelihood of a type II error. An impact evaluation has high power
if there is a low risk of not detecting real program impacts, i.e. of committing a type II error.
Power Calculations
Power calculations indicate the sample size required for an evaluation to detect a given minimum desired effect.
The required sample size depends on parameters such as power (the complement of the likelihood of a type II error), the
significance level, the variance, and the intra-cluster correlation of the outcome of interest.
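As an illustration, a minimal power calculation can be sketched with Python's statsmodels package (an illustrative tool choice; the effect size, significance level, and power below are assumed values, and a two-arm individually randomized design is assumed):

```python
# A minimal power-calculation sketch using the statsmodels package.
# It assumes a two-arm, individually randomized comparison of means with
# equal group sizes; the effect size, alpha, and power are assumed values.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Sample size per arm needed to detect a standardized effect (Cohen's d)
# of 0.3 with 80% power at a 5% (two-sided) significance level.
n_per_arm = analysis.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"Required sample size per arm: {n_per_arm:.0f}")  # roughly 176
```

For a clustered design, this figure would then be inflated by the design effect 1 + (m − 1)ρ, where m is the average cluster size and ρ is the intra-cluster correlation of the outcome.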
Pre-Conditions
Conditions that have to be met before the project can commence; these typically involve the availability of funds
from the donor agency or the approval of a specific policy/law by the government.
Pre-Planning
The process of understanding the status, condition, trends and key issues affecting people and community,
ecosystems and institutions in a given geographic context at any level (local, national, regional, international).
Problem Analysis
A structured investigation of the negative aspects of a situation in order to establish cause-effect relationships.
Problem Tree
A diagrammatic representation of a negative situation, showing a cause-effect relationship. It is the visual result
of a problem analysis.
Process-Based Evaluation
Process-based evaluations are aimed at understanding how a program or project works. They are helpful in obtaining
early warnings of operational difficulties in newly implemented programs or projects, and can also be conducted
at regular intervals to check that operations remain on track and follow established procedures. They seek to
answer the following key questions: What are the actual steps and activities involved in delivering a good or
service? How close are they to the agreed operations? Is program operation efficient? A process-based evaluation tries to establish the level of quality or success of the processes of a program; for example, the adequacy of the
administrative processes, the accessibility of program benefits, the clarity of the information campaign, the internal
dynamics of implementing organizations, their policy instruments, their service delivery mechanisms, their
management practices, and the linkages among these. Numerous questions might be asked in a
process-based evaluation, including:
• Is the program being implemented according to design?
• Are operational procedures appropriate to ensure the timely delivery of quality products or services?
• What is the level of compliance with the Operations Manual?
• Are there adequate resources (money, equipment, facilities, training, etc.) to ensure the timely delivery of quality products or services?
• Are there adequate systems (human resources, financial, management information, etc.) in place to support program operations?
• Are program clients receiving quality products and services?
• What is the general process that project beneficiaries go through with the products or projects? Are project beneficiaries satisfied with the processes and services?
• Are there any operational bottlenecks?
• Is the program or project reaching the intended population?
• Are program or project outreach activities adequate to ensure the desired level of target population participation?
Program Evaluation
Evaluation of a set of interventions, marshaled to attain specific global, regional, country, or sector development
objectives.
Note: A development program is a time-bound intervention involving multiple activities that may cut across sectors, themes and/or geographic areas.
Project
An intervention that consists of a set of planned and interrelated activities and tasks designed to achieve defined
objectives within a given budget and a specified period of time. Projects in International Programs are to address
specific needs identified by communities and families.
Project Evaluation
Refers to a technique to review the current status of a project against plan and to provide practical,
comprehensive and forward-looking recommendations for corrective action where necessary. In general terms,
project evaluation should consider: 1) Project objectives in terms of cost, time and quality; 2) Management; 3) Organization; 4) Systems and procedures; 5) Suitability of contracts; 6) Performance of consultants; 7) Work to
date in terms of cost, time and quality measured against plan.
Project Goal
Project Goal (Purpose; long-term; development objective) refers to what the project is expected to achieve in
terms of significant improvements in the lives of the target population beyond the life of the project. Examples:
demonstration of community solidarity as expressed through engagement in a number of defined activities,
increased household income and reduced malnutrition in children leading to improved quality of life as
demonstrated by improved nutrition of all members of the household.
Project or Program Objective
The intended physical, financial, institutional, social, environmental, or other development results to which a
project or program is expected to contribute.
Project Partner
The organization in the project country with which the Heifer country program collaborates to achieve mutually
agreed upon objectives. The organization works closely with the beneficiary group/community and may handle
financial and operational aspects of the project. Partners may include host country governments, local and
international NGOs, universities, professional and business associations, private businesses, etc.
Project Purpose
It is the central objective of the project and represents what the project is expected to achieve by the end of the
project and with the resources available. By achieving its purpose the project contributes to the overall
objective.
Project Strategy
An overall framework of what a project will achieve and how it will be implemented.
Propensity Score Matching (PSM)
PSM is used to measure a program's effect on project participants relative to non-participants with similar
characteristics. To use this technique, evaluators must first collect baseline data. They must then identify
observable characteristics that are likely to be linked to the evaluation question {for example: Do farmers living near
the experimental plots have higher yields from their farms than those farther away?}. The observable
characteristics may include gender, age, marital status, distance from home to the experimental sites, etc. Once the
variables are selected, the treatment group and the comparison group can be constructed by matching each
person in the treatment group with the person in the comparison group who is most similar on the identified
observable characteristics. The result is pairs of individuals or households that are as similar to one another as
possible, except on the treatment variable.
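The matching logic can be sketched as follows. This is a minimal illustration with invented column names (treated, yield_kg) and covariates; logistic regression for the propensity score and one-nearest-neighbor matching are common choices, not prescriptions from this glossary:

```python
# A minimal sketch of propensity score matching. The column names
# (treated, yield_kg) and the covariates are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def psm_effect(df: pd.DataFrame, covariates: list) -> float:
    """Average treatment effect on the treated, via 1-nearest-neighbor
    matching (with replacement) on the estimated propensity score."""
    # 1. Model the probability of treatment given the observable characteristics.
    model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treated"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # 2. Match each treated unit to the most similar untreated unit.
    nn = NearestNeighbors(n_neighbors=1).fit(control[["pscore"]])
    _, idx = nn.kneighbors(treated[["pscore"]])
    matched = control.iloc[idx.ravel()]

    # 3. Compare outcomes across the matched pairs.
    return float(treated["yield_kg"].mean() - matched["yield_kg"].mean())
```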
Proxy Indicator
An indirect indicator used to represent one that is less easily measured. For example, the condition of the
house is a proxy indicator for income.
Purpose
Refers to what the project is expected to achieve in terms of its development outcome. It relates only to the
beneficiaries, a specific area and a timeframe. It also refers to the publicly stated objectives of the development
program or project.
Qualitative Data
Qualitative data deal with descriptions. They are data that can be observed, or self-reported, but not necessarily
precisely measured. They normally describe people's knowledge, attitudes or behaviors, and are not usually
summarized in numerical form. Examples of qualitative data include: the leadership role of women in a
community; minutes from community meetings; general notes from observations; etc.
Qualitative methods
They belong to the social science tradition and are based on the observation of people in their own territory, and
interaction with them in their own language, on their own terms. Qualitative methods emphasize understanding
reality as the persons being studied construe it. Most qualitative studies rely on descriptive rather than numerical or
statistical analysis.
Quality Assurance
It encompasses any activity that is concerned with assessing and improving the merit or the worth of a
development intervention or its compliance with given standards. Examples include: Appraisal, RBM, reviews
during implementation, evaluations, etc. It may also refer to the assessment of the quality of a portfolio and its development effectiveness.
Quantitative Data
Quantitative data are data that can be precisely measured. They are concerned with quantity and are expressed
in numbers or quantities. Examples include data on age, cost, length, area, volume, weight, number, etc.
Quasi-Experimental Design
Impact evaluation designs which create a control group using statistical procedures. The intention is to ensure
that the characteristics of the treatment and control groups are identical in all respects, other than the
intervention, as would be the case in an experimental design.
Rapid Appraisal
Methods first developed in agricultur