Dissertation
M&E SYSTEM OF THE PHILIPPINE
DEVELOPMENT PLAN 2011-2016 Supply, Demand, and Use of
Information and Outputs
Maria Sherinna Ysabel JOSE
Master of Development Evaluation and Management
Supervisor: Prof. Dr. Nathalie Holvoet
Academic Year 2013-2014
Acknowledgements
First, I would like to express my gratitude to VLIR-UOS for funding my studies for
this academic year.
I would also like to express my gratitude to my supervisor, Professor Nathalie
Holvoet, for her understanding and guidance throughout the dissertation process. I would also
like to thank all the professors, lecturers and staff at the IOB for the graciousness and warmth
they extended in the last twelve months.
I would also like to thank my classmates, and now my life-long friends, Pam and
Abby for being with me through the worst and best of times. Many, many thanks to this
year‘s IOB Pinoys – Kevin, for your wit and laughter, and Jopay, for your generosity,
patience and care. I could not have asked for better companions to share this truly wonderful
year with.
I would also like to express my deepest gratitude to my friends and family in the
Philippines – Framie, Bernard, Rafa, Cai, Kuya Botchie, Ate Star, and Mom. Thank you for
your encouragement and support.
Lastly, I would like to say thank you to my Dad, who always reminded me to work
hard (and to eat even harder).
Table of Contents
Executive Summary . . . 1
1. Introduction . . . 3
2. The New Aid Agenda and M&E . . . 6
3. Some M&E Basics: Definitions, Functions, Supply, Demand and Use . . . 9
Definitions . . . 9
Functions . . . 10
M&E Supply and Demand . . . 12
The Supply Side . . . 12
The Demand Side . . . 14
4. An Overview of M&E in the Philippine Government . . . 18
History of M&E in the Philippine Government . . . 18
Monitoring and Evaluation Roles . . . 21
5. Scope and Methodology of the Diagnostic Exercise . . . 22
6. Results of the Diagnostic Exercise . . . 24
Supply . . . 24
Plan and Policy . . . 24
Organization and Coordination Mechanisms . . . 26
Capacity Building . . . 29
Demand and Use . . . 32
Outputs and Dissemination . . . 32
M&E Linkage to the Budget Process . . . 33
Use of M&E Information and Outputs by Parliament . . . 36
Use of M&E Information and Outputs by Civil Society . . . 37
Use of M&E Information and Outputs by Development Partners . . . 38
7. Conclusions and Recommendations . . . 40
Bibliography . . . 44
Appendix 1: Diagnostic Exercise Checklist . . . 47
Appendix 2: Diagnostic Tool: The Institutional Dimension of PRS Monitoring Systems (Bedi, et al.) . . . 48
Appendix 3: Diagnostic Checklist (Holvoet, et al.) . . . 55
Acronyms
ADB Asian Development Bank
BPR Budget Performance Review
CDF Comprehensive Development Framework
COA Commission on Audit
CODE-NGO Caucus of Development NGO Networks
CPBRD Congressional Policy and Budget Research Department
DBM Department of Budget and Management
DOF Department of Finance
IFAD International Fund for Agricultural Development
IMF International Monetary Fund
M&E Monitoring and Evaluation
NEDA National Economic and Development Authority
NEDA-MES National Economic and Development Authority – Monitoring and Evaluation Staff
MDG Millennium Development Goal
MfDR Managing for Development Results
MTPDP Medium Term Philippine Development Plan
MTEF Medium Term Expenditure Framework
MFO Major Final Outputs
ODA Official Development Assistance
OECD Organization for Economic Co-operation and Development
OECD-DAC Organization for Economic Co-operation and Development – Development Assistance Committee
OPIF Organizational Performance Indicator Framework
PC Planning Committees
PD Paris Declaration
PIB Performance Informed Budgeting
PDP Philippine Development Plan
PMO Project Management Office
PSM Public Sector Management
PRS Poverty Reduction Strategy
PRSP Poverty Reduction Strategy Paper
RBME Results-Based Monitoring and Evaluation
RM Results Matrices
SER Socioeconomic Report
UN United Nations
UNDAF United Nations Development Assistance Framework
Executive Summary
Monitoring and evaluation (M&E) in the Philippine government has seen considerable improvements in the last decade, but remains very much a work in progress.
Guided by the principles of the Paris Declaration, especially managing for results, the
government seeks to improve not only its M&E processes but, more broadly, its whole-of-government
results framework. In 2010, it integrated results into its planning process, through the Results
Matrices (RM) of the Philippine Development Plan (PDP) 2011-2016. As the Plan‘s principal
M&E instrument, the RM enables reporting and assessment of the progress of the country‘s
development strategies. The new M&E system is not without its issues. M&E experts have argued that, to address the challenges of an M&E system, it is helpful to see M&E as having both a supply side and a demand side. Improvement efforts should therefore examine the institutional dimensions of designing and implementing an M&E system (the supply side) and ensure that M&E information and outputs are used to inform decision-making processes (the demand side). To gain a deeper understanding of any M&E system, the first step is to undertake a stocktaking or diagnosis of both sides.
With this in mind, the researcher conducted a diagnosis to identify the strengths and weaknesses of the M&E system of the Plan. It is hoped that the findings can be used to make recommendations on which aspects of the M&E system and processes of the Philippine government can be improved.
Results of the diagnosis are as follows. On the supply side, the government is found to be weak in the area of plan and policy, as it lacks a comprehensive plan that guides the design and implementation of M&E activities. Capacity, especially the capacity to conduct evaluations, is another weak link on the supply side. Meanwhile, the M&E system is found to be strong in terms of its coordination mechanisms and in generating buy-in within the government.
In terms of demand, 'outputs and dissemination' was also found to be a weak dimension. This appears to contribute to the low utilization of M&E information and outputs by non-government actors such as Parliament and civil society. However, demand for M&E information is high in the budget process, as agencies' performance data are now being used to inform budget decision making.
Both the M&E system‘s supply and demand sides currently face numerous
challenges. In order to further transform it into a more coherent and results-oriented system,
the government must first undertake a wide-ranging and comprehensive diagnosis of its
current M&E supply and demand arrangements. Results of this diagnosis can form part of a
plan to institutionalize better M&E practices in all of its agencies. A sound national M&E policy is also critical, as it would create the environment for building capacity and unifying M&E systems across all levels of government.
1 Introduction
The monitoring and evaluation (M&E) landscape in the Philippine government
continues to evolve, and while it has consistently been improving in the last decade, there is
no doubt that it is still very much a work-in-progress.
The advancements in the country‘s M&E system are largely influenced and shaped by
the government‘s commitment to international agreements on development cooperation, most
particularly, the Paris Declaration (PD) on Aid Effectiveness, adopted by donors and aid
recipient countries in 2005. Since then, the Philippine government has taken a keener interest in concepts such as results orientation, and has worked hard to incorporate this principle in every area of its Public Sector Management (PSM). Indeed, upholding these
commitments has created an enabling environment for better development assistance
coordination, and has certainly paved the way for M&E reforms within recipient
governments such as the Philippines, but challenges remain.
The PD is centered on five core principles – ownership, alignment,
harmonization, managing for results, and mutual accountability (OECD, 2005). Out of the
five, managing for results specifically pertains to building and strengthening M&E systems.
With this principle, donors and partner countries should "manage resources according to well-defined, desired results, measuring progress toward them and using information on results to improve decision making and performance" (OECD, 2008:44-15). Recipient countries commit themselves to crafting cost-effective and results-oriented reporting and performance assessment frameworks, while donors must adopt available in-country systems and avoid separate reporting requirements (OECD, 2008). The indicator of progress is the "number of countries with transparent and monitorable performance assessment frameworks to assess progress against (a) the national development strategies and (b) sector programmes" (OECD, 2008:15). The indicator is concerned with quality, stakeholder access to information, and the extent of utilization of the information generated from the results-based system.
The evaluation of the progress of the implementation of the PD in the Philippines (the 2008 PD survey) (OECD, 2008) explores some of these issues related to M&E. According to this evaluation, the Philippines made little progress on the 'managing for results' indicator, particularly in forming its whole-of-government results framework. The government enumerated several challenges: new data collection and monitoring processes are difficult to adopt; roles, functions and institutional arrangements need to be properly defined; incentives to fully apply new systems are lacking; and accountability for achieving results, starting at the outcome level, is weak (OECD, 2008).
That is not to say that gains have not been made. In 2010, as the new presidency came in and new national development strategies were to be drafted, the Philippine government introduced the Results Matrices (RMs) to accompany the Philippine Development Plan 2011-2016 (known simply as the Plan or PDP). The RMs consist of an indicator framework for evaluating development outcomes and impacts, with the overall aim of integrating results into the strategies and programs indicated in the PDP. They also serve as the overall monitoring and evaluation tool to track progress of the Plan (NEDA, 2011a). Progress of the Plan (at the level of sector and sub-sector outcomes) is reported through the annual Socio-Economic Report (SER) and the mid-term plan update (OECD, 2008).
Despite this progress, in the subsequent 2011 PD Survey, the Philippines still received a score of C [1] for its results-oriented monitoring framework, falling short of the 2010 target of a score of A or B. Even with measurable targets clearly defined in the Plan and the RMs, one remaining issue was ensuring that the available statistics (sector and subsector) are responsive to measuring progress against Plan targets. Another major challenge was establishing the link between program and project outputs and subsector- and sector-level outcomes (OECD, 2011).
Today, building more effective M&E systems is seen as a relevant research agenda, one that can contribute substantially to the goal of building better development policies. With this in mind, the first significant step is to learn what works (and what does not) in existing M&E systems, and to ask why. As affirmed by Holvoet and Inberg (2012), "regardless of the approach adopted, an important step in any M&E capacity-building effort is to take stock of what already exists at the M&E supply and demand side" (Holvoet and Inberg, 2012:5). The authors further acknowledge that engaging in this kind of exercise is "consistent with the idea that small incremental changes to existing systems might be more feasible and workable than radical and abrupt changes that seek to impose blueprints from the outside" (Holvoet and Inberg, 2012:5).
[1] Assessment of results-based frameworks is based on the LEADS method: A - sustainable; B - largely developed; C - action taken; D - elements exist; and E - little action (World Bank, 2007).
Based on the results of the 2008 and 2011 PD surveys, there is clearly much to be done to improve the M&E component of the Philippines' current national plan. In this paper, we take on this 'important step' toward M&E strengthening and examine the strengths and weaknesses of the said M&E system, on both its supply and demand sides, through a diagnostic exercise. The results of this exercise are presented later in this paper.
The researcher would like to note a limitation of this study: it is purely a desk review of documents, including academic literature as well as pertinent policy papers, reports, and forms requested from various Philippine government agencies and publicly available online.
It is hoped that whatever can be gleaned from this diagnostic will be of use in strengthening the M&E system of the current Plan (in its two remaining years), and perhaps even in guiding the implementation of a more concrete M&E plan for the subsequent Philippine Development Plan (2017-2022). I would like to note that, as some of the findings in this paper are drawn from my own experience as part of the Government of the Philippines, it can also be considered a self-assessment.
This paper is structured in the following manner: Chapter 2 explains the context of the new aid approach and how it affected the M&E landscape in developing countries such as the Philippines. Chapter 3 reviews basic M&E notions, including definitions, functions and uses, as well as the concept of supply and demand for M&E. Chapter 4 gives an overview of the M&E landscape in the Philippines. Chapter 5 explains the scope and methodology used for the diagnostic exercise. Chapter 6 presents the results of the diagnostic exercise. Chapter 7 concludes with the findings of the paper and proposes recommendations and next steps to further improve the Philippine government's M&E system.
2 The New Aid Agenda and M&E
The end of the nineties saw a shift in the way development strategies are implemented, with the introduction of models such as the World Bank's Comprehensive Development Framework (CDF), which articulates a set of principles (e.g., results, country ownership, partnership, transparency) to guide the provision of external assistance as a means to reduce global poverty and inequity (World Bank, 2014a). Following the CDF, and to serve as a concrete action plan for it, the Poverty Reduction Strategy Paper (PRSP) approach was launched by the World Bank and the IMF (Stern, 2008; Holvoet and Renard, 2006). The
PRSP process was not exclusively focused on aid (as a range of country policies affecting
poverty reduction are discussed), but it incorporates a number of CDF principles, while also
representing the IMF and World Bank‘s commitment to pro-poor funding (Stern, 2008). The
PRSPs, prepared by recipient governments every three to five years, are rooted in five core
principles: country-driven (promotes national ownership of strategies); result-oriented
(focuses on outcomes that will have an impact on the poor); comprehensive (recognizes
multi-dimensionality of poverty); partnership-oriented (involves coordination among
government, donors, and other domestic stakeholders); and, based on a long-term perspective
(benefits are sustained, leading to poverty reduction) (IMF, 2014).
According to Stern (2008), the PRSPs are also intended to highlight issues of
governance, institutional capacity building, and mutual accountability. But above all, the
PRSPs must emphasize broad-based participation within the recipient countries. The author
adds: "To an extent conditionality is traded off against participation: when participation is strong, conditions are less stringent" (Stern, 2008:9). With successive reviews of the PRSP process, it eventually became the strategic and implementation vehicle by which the priorities and targets of the Millennium Development Goals (MDGs) were to be achieved (Stern, 2008).
By 2005, the main multilateral development actors had a generally consistent understanding, and indeed even a common language, for aid and development. The various initiatives (CDF, PRSPs) and policy statements (the Millennium Declaration (2000); the Monterrey Consensus (2002); the Rome Declaration (2003); and the Joint Marrakech Memorandum (2004), among others) led up to and cohered into a model of how development and aid should be understood, managed and delivered: the "new aid paradigm". This coherence is adequately captured in the Paris Declaration on Aid Effectiveness of 2005 (and subsequently reaffirmed in the Accra Agenda for Action in 2008) (Stern, 2008).
The Paris Declaration (PD) provides a practical roadmap to improve the quality of aid
and strengthen its impact on development. Much like the CDF of the late nineties, the PD is
grounded in a similar set of five core principles: ownership (recipient countries will exercise
effective leadership over development plans, policies, and strategies to reduce poverty);
alignment (donors will base their support on recipients‘ development goals and strategies,
procedures and institutions); harmonization (donor countries will coordinate so that aid
delivery processes are simplified, harmonized and not duplicated); managing for results
(developing countries will shift focus on attaining development results and getting results
measured); and, mutual accountability (both donor and recipient countries pledge to be
mutually accountable for implementing aid commitments) (OECD, 2005:3-8).
Beyond what is written in policies and international consensus, however, the new aid
paradigm is expected to change the way donors manage their aid and development
cooperation activities. Holvoet and Renard (2006), for instance, note certain practical considerations relating to the modality through which aid is delivered, as well as the type of results that donors and countries are expected to track: "At a time when donor agencies are themselves pushed towards more performance management involving increasing demands of accountability towards their own constituencies, the new aid paradigm propagates a shift from donor to recipient control, from 'identifiable' project aid activities to non-earmarked, common pool funding, and from a focus on inputs to difficult attributable outcomes" (Holvoet and Renard, 2006:67).
M&E is seen as one of the most crucial and invaluable elements of the new aid agenda (Holvoet and Rombouts, 2008). Holvoet and Renard (2006) point out that "the consequences of the new aid paradigm, if it can impose itself, for monitoring and evaluation (M&E) are huge and daunting" (Holvoet and Renard, 2006:67).
With this new aid landscape, recipients are now compelled to abide by the conditionality of implementing nationally led planning, budgeting and M&E systems. And with the centrality of concepts such as results-based management, policy-making processes are now expected to be evidence-based, and are thus heavily dependent on how well M&E systems work, including processes such as information collection, analysis and feedback.
The PRSP process, in particular, was expected to steer the establishment of these systems
towards the right direction (Holvoet and Renard, 2006).
Meanwhile, donors were also expected to relinquish control of the M&E of their programs and projects, and concede to the idea of country-led M&E systems. "This supposes a considerable degree of trust in the willingness and capacity of the government to build up a convincing national system and to perform the demanding task of monitoring and evaluation according to international standards" (Holvoet and Renard, 2006:67).
If PRS studies (Holvoet and Renard, 2006; Holvoet and Rombouts, 2008; Bedi, et al., 2006) are to be taken as evidence, monitoring and evaluation under the PRSP seems to be facing the toughest challenges. However, there is room for optimism, as gains (although minimal) have also been achieved. First, it is felt that the approach has at least led to an upsurge in the collection of poverty-related statistics. Second, with more attention given to qualitative issues, there is an opportunity to explore mixed-methods approaches combining both quantitative and qualitative aspects of research. Third, since the PRSP approach assigns quite a significant role to a country's Ministry of Planning or Finance, it should improve the alignment between the different components of public sector management – planning, budgeting and M&E (Holvoet and Renard, 2006). This arrangement is expected to raise the probability of the use of M&E findings, which, in the most ideal setting, is the yardstick for a successful M&E system (Holvoet and Renard, 2006; Mackay, 2007).
While the Philippines is not a PRS country, it does have its own poverty reduction strategies, which are contained in its current national development plan, the Philippine Development Plan 2011-2016. As earlier noted, the Philippine government has committed to uphold the principles of international agreements, foremost of which is the Paris Declaration on Aid Effectiveness. This continues to provide an enabling environment, as well as an impetus, for its M&E reform agenda of strengthening national-level M&E. With the principle of managing for results in mind, the government has set out to create an M&E system for the Plan, primarily through the Results Matrices (RMs). It has also started to establish the linkage between the RMs and its budgetary tools. These related initiatives are the subject of the diagnostic exercise in the latter part of this paper (see Chapter 6).
3 Some M&E Basics: Definitions, Functions, Supply and Demand
In this chapter, we review the different notions of M&E – its basic definitions, as well as its functions in supporting better government. We also explore the concepts of supply of and demand for M&E, and how both are integral to the success of an M&E system.
Definitions
Monitoring is defined as "a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing development intervention with indications of the extent of progress and achievement of objectives and progress in the use of allocated funds" (OECD-DAC, 2002:27-28).
Meanwhile, evaluation is defined as "the systematic and objective assessment of an ongoing or completed project, programme or policy, its design, implementation and results. The aim is to determine the relevance and fulfillment of objectives, development efficiency, effectiveness, impact and sustainability" (OECD-DAC, 2002:21).
Put together, Dale (2004) describes monitoring and evaluation (M&E) as being "undertaken to find out how a programme or project performs or has performed, including reasons for aspects of performance, whether positive or negative" (Dale, 2004:50).
Juxtaposed, the two are clearly distinct but complementary functions. Monitoring is intended to be descriptive: it mainly answers where a set of policies or programs stands relative to targets and outcomes at a given time. Evaluation, on the other hand, addresses the question of causality, providing evidence on why targets and outcomes are or are not being met (Kusek and Rist, 2004).
More specifically, in the context of the new aid paradigm and its emphasis on results orientation, M&E systems here pertain to results-based M&E (RBME) systems. Indeed, in this paper, the focus of the study is RBME, as a number of governments in the developing world, including the Philippine government, have chosen to adopt this term for their own M&E systems.
RBME is thus defined as "a special public management tool governments can use to measure and evaluate outcomes, and then feed this information back into the ongoing processes of governing and decision making" (Kusek and Rist, 2004:12). RBME moves away from the mere tracking of inputs and outputs; as Kusek and Rist (2004) put it, "of particular emphasis here is the expansion of the traditional M&E function to focus explicitly on outcomes and impacts" (Kusek and Rist, 2004:13).
Functions
In recent years, more and more governments in developing countries are recognizing the important role monitoring and evaluation (M&E) systems play in supporting better government. M&E systems help governments understand their performance (whether good or poor) and measure the quantity and quality of the outputs (i.e., goods and services) they provide to their citizens. With the focus on results in the new aid reform agenda, developing-country governments are now hard at work building so-called 'results-based' M&E systems, whose function is not only to track outputs but also to analyze whether these translate into outcomes and long-term positive impacts (Mackay, 2007).
Holvoet and Rombouts (2008) summarize the functions of M&E as "the fulfillment of 'accountability' towards funders, taxpayers and citizens on one hand, and, on the other, 'lesson learning' and 'feedback' towards management and policy makers (with the final aim to improve further interventions)". Taking these three functions together, M&E systems have the potential to be a powerful tool to support better government (Mackay, 2007).
In terms of accountability, Mackay (2007) notes that M&E indeed plays a role in enhancing accountability and transparency relationships: between donors and recipient governments; between donor countries and their constituencies; between recipient governments and their own citizens/beneficiaries, Parliaments and civil society; and even within government, for example, between oversight agencies and sector ministries. In development, accountability refers more precisely to "the obligations of partners to act according to clearly defined responsibilities, roles and performance expectations, often with respect to the prudent use of resources. For evaluators, it connotes the responsibility to provide accurate, fair and credible monitoring reports and assessments. For public sector managers and policy-makers, accountability is to taxpayers/citizens" (OECD-DAC, 2002:15).
Meanwhile, feedback, one crucial function of evaluation, is defined as "the transmission of findings generated through the evaluation process to parties for whom it is relevant and useful so as to facilitate learning. This may involve the collection and dissemination of findings, conclusions, recommendations and lessons from experience" (OECD-DAC, 2002:23). Related to this function, Mackay (2007) adds that M&E plays a role
in supporting policy making related to national planning and budget decision making. M&E findings can inform deliberation processes and provide evidence about which types of government interventions (employment programs, conditional cash transfers, or health programs, among others) are most cost-effective. Most common these days is the use of M&E information in performance-based budgeting, where governments must prioritize among competing spending proposals. In performance budgeting, there is often a formula-driven relationship between budget and results; that is, budget allocation is based on a program's performance as measured by its outputs and outcomes. In most cases, however, program results are not the sole input to budget allocations; other considerations, for instance a president's policy priorities, are also taken into account (Mackay, 2007).
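To make the formula-driven relationship concrete, the toy sketch below allocates a budget in proportion to a weighted mix of performance scores and policy priorities. It is purely illustrative: the program names, scores, weights, and the allocate function are hypothetical assumptions, not drawn from any actual government budgeting formula.

```python
# Purely illustrative sketch of formula-driven, performance-informed
# budget allocation. All names and numbers are hypothetical.
programs = {
    # name: (performance_score in [0, 1], policy_priority in [0, 1])
    "employment": (0.80, 0.5),
    "cash_transfers": (0.60, 1.0),
    "health": (0.70, 0.8),
}

total_budget = 100.0  # arbitrary units


def allocate(programs, total, perf_weight=0.7):
    """Split the budget in proportion to a weighted mix of measured
    performance (outputs/outcomes) and other inputs such as policy
    priority -- results are one input among several, as in practice."""
    scores = {
        name: perf_weight * perf + (1 - perf_weight) * prio
        for name, (perf, prio) in programs.items()
    }
    total_score = sum(scores.values())
    return {name: total * s / total_score for name, s in scores.items()}


allocation = allocate(programs, total_budget)
```

Lowering `perf_weight` shifts the split toward policy priorities; raising it makes the allocation more strictly performance-based, which mirrors the trade-off described above.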
In terms of its learning function, the term lessons learned can be described as: "Generalizations based on evaluation experience with projects, programs, or policies that abstract from the specific circumstances to broader situations. Frequently, lessons highlight strengths or weaknesses in preparation, design and implementation that affect performance, outcome, and impact" (OECD-DAC, 2002:26). According to Mackay (2007), M&E helps
inform managers about the most efficient use of resources at the sector, program and project
levels. Performance indicators can be used to make cost and performance comparisons, and
can eventually determine good and bad practices, and reasons for such. Evaluations, in
particular, are used to identify these reasons. The same author equates the learning function
of M&E to what is known in development as Managing for Development Results (MfDR)
(Mackay, 2007).
By now, we know the importance and functions of M&E systems in supporting overall public sector management. But Bedi, et al. (2006) suggest that they are all the more indispensable for governments that have planned and are trying to implement a poverty reduction strategy. An M&E system is seen as "a pillar upon which a [poverty reduction strategy] can be elaborated; it helps open the policy space for dialogue, establish priorities, design programs and policies, set realistic targets, and assess implementation with a view to refine the strategy" (Bedi, et al., 2006:9).
However, studies conducted on PRS countries have shown that achievements in terms of improving M&E systems have been limited (Bedi, et al., 2006). Although the PRSP approach ultimately seeks enhancements in M&E, it has set out rather unrealistic goals that put more strain on countries' existing national M&E systems (Bedi, et al., 2006; Holvoet and Renard, 2006). M&E remains the "weaker part of the new aid architecture" (Holvoet and Renard, 2006:66).
M&E Supply and Demand
Bedi, et al. (2006) suggest that in order to overcome the major practical challenges in M&E today, it is helpful to look at M&E as having two sides: supply and demand. If governments seek to improve their existing national M&E systems, they should focus on the institutional rather than the technical dimensions of M&E; that is, designing and implementing a coherent system for M&E across all sectors contained in the strategies (the supply side), and encouraging the use of M&E information to guide the development of poverty reduction programs (the demand side) (Bedi, et al., 2006).
The Supply Side
Most governments that embark on implementing their policies and strategies already have some M&E mechanisms in place, for example, census programs at the national level, administrative data systems at the sector level, and even individual project-level systems. However, these are operated independently of one another; in other words, M&E arrangements are dispersed and fragmented. Governments must then strive to design and implement a coherent system of M&E across all sectors, that is, to strengthen the supply side of M&E. Creating a unified M&E system involves two processes: rationalization and coordination (Bedi, et al., 2006).
First, a unified system should involve rationalizing M&E activities that are already in place, instead of introducing new ones. Rationalization could include the consolidation of duplicated activities, the adoption of universal definitions for all involved actors, and a reduction in the number of data systems, among others. The ideal outcome is a lightened administrative burden; adding new processes and activities, however technically sound, could be more detrimental than supportive of the M&E system (Bedi, et al., 2006).
Next, an M&E system also relies on coordinating and defining relationships among the different actors involved in the field. The M&E system does not need to be housed in one central agency; rather, responsibilities should be distributed across different agencies. With this arrangement, agencies' transparency and accountability for their own performance are strengthened as well (Bedi, et al., 2006). The success of an M&E system also depends on generating buy-in from all stakeholders (Kusek and Rist, 2004). "Ideally, the institutional design should emerge out of a shared commitment to solving the practical problems of PRS implementation. Systems are consensual in nature and tend to function only if participants find them useful and legitimate and agree on a common purpose" (Bedi, et al., 2006:xvii).
Experience from well-functioning M&E systems suggests that there are certain
elements required in order to achieve success in institutional design.
First is strong political leadership. The key to a good institutional design is to place the lead as close as possible to the center of government, for example, in the ministry of finance or planning. This gives the system greater authority and, at the same time, makes it easier to create links to policy and budgetary processes (Bedi, et al., 2006). A 'champion' is also vital: an influential senior official who is able to push the M&E agenda and its institutionalization, win over colleagues about its priority in sound public sector management, and dedicate resources to support the creation of the M&E system. This champion is not only a personality, but an official who understands M&E tools and methods and appreciates M&E's significance and usefulness in improving government (Mackay, 2007; Kusek and Rist, 2004).
Second is establishing linkages with line ministries or sector agencies, and strengthening their existing M&E systems. "Where sectoral monitoring itself is weak, the PRS monitoring system may need to include an active strategy for promoting sound monitoring practices, such as rules requiring that monitoring and evaluation functions be incorporated into departmental budgets, work plans, and job descriptions. Ensuring that the needs of the PRS monitoring systems of donors are aligned with sectoral information systems is likely to increase compliance and performance" (Bedi, et al., 2006:xix).
Third, an M&E system should strongly involve statistical agencies. As one moves up the results chain, the task of monitoring and evaluating results increasingly relies on data
that can only be provided by a statistics office. Thus, the role of the national statistical office becomes even more significant for a national-level M&E system, where the concern is the generation and analysis of data at the outcome and impact levels (Edmunds and Marchant, 2008). Arrangements should ensure complementarity and coordination between the central M&E unit and the statistical office "so that they support each other rather than compete or conflict, particularly in avoiding the development of parallel information systems" (Edmunds and Marchant, 2008:39).
The Demand Side
Mackay (2007), however, argues that M&E systems, on their own, do not have intrinsic value. While it is highly important that governments devote their efforts to building and strengthening their M&E systems (the supply side), evidence suggests that only those governments that heavily utilize and analyze M&E information have been able to significantly improve the performance of their policies and programs (Mackay, 2007). These issues refer to the demand side of M&E.
"Efforts to build an M&E system will fail unless real demand exists or can be intentionally created, especially by ensuring that powerful incentives are in place to conduct and use M&E" (Mackay, 2007:53). While creating a policy or an edict that pronounces the importance of M&E matters, it is not enough without action to accompany it. On its own, a policy is unlikely to produce quality M&E outputs, and usually leads only to lip service, ritual compliance or active resistance (Mackay, 2007).
Both Mackay (2007) and Bedi, et al. (2006) agree that there are indeed technical aspects to M&E that need to be built and managed carefully, but this must not be seen as purely technocratic work, like that of creating a financial management or a procurement system. Mackay (2007) points out that "[…] technocratic emphasis is highly inadequate if it ignores the factors that determine the extent to which M&E information is actually used. Where an M&E system is underutilized, this not only constitutes a waste of resources, but it is also likely to seriously undermine the quality of the information the system produces. It also throws into question the sustainability of the system itself" (Mackay, 2007:1-2).
Porter and Goldman (2013) add another important point about the issue of demand for
M&E. According to the authors, there is demand for M&E when decision-makers, either in
political or bureaucratic positions, are willing to use evidence gathered from M&E systems to
support their decision-making processes. However, "for the M&E system to be used and sustainable, it is important that demand is endogenous to the governance context in which it is operating, as opposed to arising from structures external to the system, such as donors (exogenous)" (Porter and Goldman, 2013:2). Building demand is thus an exercise in establishing linkages, first and foremost, among in-country stakeholders, and between the M&E system and formal government policy processes where M&E information is likely to be influential (Bedi, et al., 2006; Mackay, 2007).
Mackay (2007) explains that one barrier to M&E demand is stakeholders' lack of knowledge of what M&E encompasses, especially where the buy-in of senior government officials is indispensable before significant effort can be put into creating, and funding, an M&E function. This is an illustration of a chicken-and-egg problem. "There is a lack of government demand for M&E because of the lack of understanding of M&E and what it can provide; there is a lack of understanding because of the lack of experience with it; and there is a lack of experience because of weak demand" (Mackay, 2007:53).
The author offers a solution to this issue: raising awareness about M&E within the government. Demand can be created when key officials begin to better understand M&E (including its tools and techniques), when they are exposed to cases of cost-effective M&E systems and reports, and when they are shown examples of other governments that have been able to set up well-functioning and highly valued M&E systems (Mackay, 2007).
The work of Bedi, et al. (2006) further explains how to encourage greater use of monitoring and evaluation outputs.
First, if PRS M&E information is to guide policy making, the system must not only concentrate on monitoring, but take on analysis and evaluation functions as well. Analytical units should be kept small, placed close to central decision-making bodies, and focused purely on analysis. This type of arrangement has been shown to work in a number of PRS countries such as Bolivia, Nicaragua, Tanzania and Uganda (Bedi, et al., 2006).
Evaluation capacity development is also seen as part of the broad effort to enhance
public sector management systems geared towards achieving country goals. When
governments are able to conduct their own evaluations and use evaluation results
appropriately, this can contribute to better planning, policy making and management of
development interventions, thereby improving effectiveness and domestic accountability
systems (OECD, 2009).
Even with sound analysis and the capacity to evaluate, however, the issue at hand remains: how do we make sure that M&E information is effectively used?
To address this issue, monitoring, analysis and evaluation information must be compiled into outputs and disseminated to relevant stakeholders within and outside government. Suppliers of M&E outputs (usually in the form of annual progress reports) should ensure that reports are written concisely, contain strong analysis, and avoid technical and complicated donor language. If there are multiple audiences, reports must be packaged in different formats, taking into account each audience's interests, preferences, functions and information needs. Further, decision makers usually look for recommendations or some indication of future actions required in relation to findings. Suppliers of M&E information would then need to provide a range of estimates of the costs and consequences of implementing certain recommendations (Kusek and Rist, 2004).
Finally, an important consideration for dissemination is timing. Release of outputs
should be synchronized with the timing of key events in the policy cycle (e.g. budget
submission, MTEF reviews, parliamentary budget reviews) (Bedi, et al., 2006).
As earlier noted, M&E plays many significant roles in the public sector. One of the most crucial elements of running an effective public sector is the use of M&E to inform the spending of public funds (Krause, 2010). The use of M&E as a tool for budgeting, usually referred to as performance-based budgeting, is an essential component of the success of a PRS M&E system (Bedi, et al., 2006).
Performance-based budgeting "aims to improve the efficiency and effectiveness of public expenditure by linking the funding of public sector organizations to the results they deliver, making systematic use of performance information" (Robinson and Last, 2009:2).
Bedi, et al. (2006) offer insights about the link between M&E and the budget process. "The need to access public resources creates powerful incentives across all public agencies and provides the most promising hook for creating demand for effective monitoring. Moreover, unless the link is established, the PRS monitoring system will fail to meet its central objective of information-based decision making because budgets are the central mechanism for policy implementation" (Bedi, et al., 2006:42).
In the budgeting process, priority should be given to programs and projects that have been shown to have an impact on reducing poverty. The M&E system should be able to capture this information, and therefore guide budget decisions (Krause, 2010).
In OECD countries, performance-based budgeting usually takes the form of a performance-informed budget process. In this model, performance indicators are integrated into the budgets, and M&E serves as a built-in analytical tool within the budget cycle. Other countries complement traditional line-item budgeting with data on individual agency performance (Krause, 2010).
In PRS countries, the most probable strategy is the "introduction in the rules and procedures surrounding the budgetary process of a requirement that spending agencies justify their resource bids according to PRS priorities and the evidence on past program performance" (Bedi, et al., 2006:43). This kind of strategy is likely to work in countries with a Medium Term Expenditure Framework (MTEF) (Bedi, et al., 2006; Robinson and Last, 2009). The MTEF is a tool that aims to improve expenditure prioritization, and it is in the process of determining these priorities that performance information is most useful and crucial (Robinson and Last, 2009).
If an MTEF has not been introduced, there are alternative opportunities in the annual budget process and the preparation of public investment plans. When sector agencies bid for resources, there is an opportunity to promote the use of M&E data, as these could serve as evidence that their proposed programs or strategies have the potential to contribute to the goal of poverty reduction (Bedi, et al., 2006).
Bedi, et al. (2006) also offer a caution about linking the M&E system to the budget. M&E data might not always be sufficiently accurate or suitable to feed back into expenditure priorities. Attribution of results to spending also becomes an issue, especially when not just one but a number of programs (some of which may even be implemented outside of budgets) together influence the achievement of outcomes.
4 An Overview of M&E in the Philippine Government
A. History of M&E in the Philippine Government
The growth of M&E in the country can be traced back to the early 1990s. With the increasing size of the Official Development Assistance (ODA) portfolio, which extended to sectors other than infrastructure, an organic office in the government was created (the NEDA Project Monitoring Staff), tasked primarily to monitor the implementation progress of ongoing development projects (project-level input-output M&E). The office's work focused on tracking efficiency indicators of major infrastructure projects and addressing implementation bottlenecks (NEDA, 2011b).
In the late 1990s (1996-1999), the passage of the ODA Act of 1996, as well as NEDA Board Resolutions in 1999, facilitated the initial conduct of results monitoring and evaluation of major development projects. Meanwhile, development partners also turned their orientation towards results and outcomes, granting several technical assistance (TA) packages to the government to build its results-orientation capacity. There was also a growing interest in ex-ante and ex-post evaluation on the part of implementing agencies (NEDA, 2011b).
In the early 2000s, the government operated in a fiscal deficit environment, which called for efficiency measures in operating both ODA and locally-funded projects. The oversight agencies, NEDA (planning), DBM (budgeting) and DOF (financing), spearheaded the implementation of several donor-assisted grants that introduced a sectoral approach to planning, budgeting and M&E. During this period, the Medium-Term Expenditure Framework was crafted to ensure that the allocation of resources is informed by the strategic policy priorities indicated in the Medium Term Plan. With this sectoral approach, accountability for delivering cost-efficient sector outputs was shared among agencies belonging to the same sector (NEDA, 2011b).
Between 2005 and 2008, the Philippines endorsed several international commitments on development effectiveness (e.g., the Paris Declaration (2005) and the Accra Agenda for Action (2008)). The focus of development and aid practices thus turned to the principles of harmonization, alignment and managing for development results (MfDR) (NEDA, 2011b).
In the Philippines, this translated into the conduct of joint analytic work and reviews, the use of strengthened country systems and, overall, better coordination between government and development partners, as well as harmonized processes within the donor community. The
shift in the government's focus towards MfDR also put an emphasis on evaluation as an effective tool for measuring performance and tracking results (NEDA, 2011b).
By 2009, earlier initiatives on project-level and sector-oriented results-based monitoring and evaluation (RBME) had been continually enhanced and had started to converge into country capacity sufficient to aim for a higher-level development evaluation objective: an integrated sectoral RBME framework. In terms of the M&E of the ODA portfolio, the tracking of sectoral, outcome-level results gained priority. Accountability for development results is now shared by government and development partners (NEDA, 2011b).
Today, the work of tightening the results framework continues. In 2010, the government integrated results into the Philippine Development Plan through the introduction of the Results Matrices. As an M&E tool, the Results Matrices provide an indicator framework for the Plan and allow for the monitoring and assessment of progress towards indicator targets.
B. Monitoring and Evaluation Roles
B.1 Implementing Agency Level
Implementing agencies conduct the M&E of their own programs and projects. A 2012 survey of 24 line agencies reveals that each one has its own M&E arrangements, but three general categories can be distinguished: M&E is (a) embedded within a project management office, (b) the responsibility of the planning unit or a specialized unit within the implementing agency, or (c) exercised by both the project management office and other units within the implementing agency (NEDA, 2012).
B.2 Oversight Agency Level
Department of Budget and Management
The Department of Budget and Management (DBM) is an executive body in the Philippine government whose mandate is to promote the efficient management and utilization of resources across government. DBM's functions include the preparation of the medium-term expenditure plan and the formulation of the annual national budget, ensuring that funds are prioritized and allocated to support the annual program of the Philippine government (DBM, 2014a). The M&E function of DBM is carried out through the Budget Performance Review (BPR), which tracks the budget execution of all government departments. The BPR is
focused on tracking agencies' ability to deliver Major Final Outputs (MFOs, in the form of goods and services). It is conducted in the middle and at the end of each year, through the collection of financial and physical performance data (MFO performance indicators and targets) captured from departments' budget accountability reports (NEDA, 2012).
Commission on Audit
The Commission on Audit (COA) is an independent constitutional office within the Philippine government, tasked to "audit all accounts pertaining to all government revenues and expenditures/uses of government resources and to prescribe auditing rules" (COA, 2013). COA consolidates audit observations and recommendations in a report, in which they are grouped according to themes such as budget, procurement, civil works and goods, financial and physical performance, and project sustainability (COA, 2013).
National Economic and Development Authority (NEDA)
At the oversight agency level, the M&E of ODA is largely done by the National Economic and Development Authority (NEDA), the country's social and economic development planning and policy coordinating body.
The bulk of NEDA's M&E work relates to the monitoring (and some evaluation) of ongoing ODA programs and projects. Republic Act (RA) No. 8182 (the ODA Act of 1996) mandates NEDA to conduct an annual review of the status of all projects financed by ODA, identify implementation issues and reasons for bottlenecks, and assess continued project or program viability. The agency is required to submit to Congress a report on the outcome of the review not later than June 30 of each year. Within NEDA, these M&E tasks rest primarily with the Monitoring and Evaluation Staff or NEDA-MES (formerly the Project Monitoring Staff). Other staffs, meanwhile, are involved in the ex-ante evaluation of programs and projects (a critical step before a program or project is approved for implementation) and in their re-evaluation (in cases of requests for restructuring or extension of loan validity, among others). NEDA regional offices are responsible for the M&E of programs and projects implemented within their regions. This is done through a mechanism called the Regional Project Monitoring and Evaluation System, overseen by the National Project Monitoring Committee, whose secretariat is also the NEDA Monitoring and Evaluation Staff.
NEDA also oversees the crafting of the Philippine Development Plan, and in the current planning cycle (2011-2016) it introduced an M&E mechanism for the Plan through the Results Matrices.
5 Methodology and Scope of the Diagnostic Exercise
In this study, the researcher utilized the M&E Institutional Dimension of the PRS Monitoring System Diagnostic Tool by Bedi, et al. (2006) as the main basis of the diagnostic exercise, modified to incorporate elements from the M&E Diagnostic Checklist by Holvoet, et al. (2012) (see Annex 1).
The first checklist was developed by Bedi, et al. (2006) to generate information on the current monitoring systems of PRS countries: the demands, activities and capabilities of their stakeholders. The checklist is divided into three general components: (a) the institutional context and design of the PRS; (b) the ability of the PRS monitoring system to supply information; and (c) the demand for and use of PRS monitoring system information (see Annex 2).
The three components are further subdivided into the following. The first component,
the institutional context and design of the PRS, is sub-categorized into six dimensions: (i)
design process; (ii) institutional leadership; (iii) coordination; (iv) oversight; (v) legislation
and regulation; (vi) national statistics.
The second component, ability to supply information, is sub-categorized into seven
dimensions: (i) capacity for data production; (ii) sources of data; (iii) relevance; (iv) capacity
for analysis; (v) capacity for evaluation; (vi) outputs and dissemination; (vii) capacity
building and funding.
The third component, demand for and use of the PRS monitoring system information, is sub-categorized into five dimensions: (i) use in budget and planning; (ii) use in line ministries; (iii) use in parliament; (iv) use by development partners; (v) use by civil society.
Meanwhile, the second checklist was developed by Holvoet, et al. (2012), who used it to take stock of Poverty Reduction Strategy Papers' M&E arrangements in 20 sub-Saharan African countries. This tool includes 23 questions, sub-categorized into six M&E dimensions: (a) policy; (b) indicators, data collection and methodology; (c) organizational issues; (d) capacity-building; (e) participation of non-governmental actors; and (f) use (Holvoet, et al., 2012) (see Annex 3).
Merging elements from these two checklists, the researcher came up with a modified checklist divided into two major categories: supply-side questions and demand and use questions.
Supply-side questions are further divided into three components: (a) plan and policy; (b) organization and coordination mechanisms; and (c) capacity building.
Meanwhile, the demand and use questions are subcategorized into the following
dimensions: (a) outputs and dissemination; (b) links to the budget process; (c) use of M&E
information and outputs by parliament; (d) use of M&E information and outputs by civil
society; (e) use of M&E information and outputs by donors.
The aim is not to answer each question in the original diagnostic checklists (as they are quite numerous and repetitive); rather, they serve as a guide in going through each of the dimensions.
Before proceeding to the results, the researcher would like to note that this diagnosis is subject to several limitations. First, the diagnosis was done only for the M&E system of the current national plan, the Philippine Development Plan 2011-2016. Second, it largely emphasizes the monitoring aspects of the M&E system, as this has also been the focus of the government's M&E function related to the Plan. Third, the basis of this diagnosis was a desk review of documents requested from various Philippine government agencies, as well as those publicly available online. Lastly, as the researcher is part of the Philippine government, the diagnosis can also be considered a self-assessment of the government's current M&E situation.
While the scope of the diagnostic exercise is limited, it is hoped that what can be gleaned from it, in terms of both the method and the actual results of the review, may be of use in strengthening the current M&E systems of NEDA. It is also hoped that it can add to the very limited literature on the Philippine M&E experience.
6 Results of the Diagnostic Exercise
Below are the results of the diagnostic exercise. The discussion is divided into two
major categories: first, supply dimensions, and second, demand and use dimensions.
A. Supply Dimensions
A.1 Plan and Policy
For the M&E of its national development strategies, the government has no comprehensive M&E plan, nor a formal document outlining one. It does, however, have an indicator measurement framework formally written up in the form of the Results Matrices (RM), an accompanying document to the PDP 2011-2016.
The RM is a tool designed to provide results-orientation to the current Plan. Grounded in the principle of results-based management, it shifts the government's focus from mere input-output monitoring to an emphasis on the achievement of outcomes and impacts. As the Plan's principal M&E instrument, the RM enables reporting and assessment of progress on the Plan (NEDA, 2011a). The RM was formally published and disseminated to a wide range of actors, including (and especially) sector agencies, parliament, development partners, and civil society.
The RM, following the format of the national plan, contains nine chapters, each corresponding to one of the nine priority sectors of the government: macroeconomy; industry and services; agriculture and fisheries; infrastructure; financial; good governance and rule of law; social development; peace and security; and environment and natural resources (NEDA, 2011a). Each of the nine chapters contains statements of objectives, illustrated through an objective tree. These results statements were based on the 'strategic framework' of each sector, decided upon by members of the respective planning committees and subcommittees, which are composed of officials from line ministries as well as other stakeholders within the same sector.
Further, each chapter or sector has a list of indicators for the various levels of development results (goal, sector and sub-sector outcomes) expected to be delivered by 2016, with corresponding baseline information, end-of-plan targets and responsible agencies, all contained in a matrix format (NEDA, 2011a). The indicators were recently revalidated in
2013, and progress towards targets was also recorded and once again captured in an updated RM volume.
Figure 1. Objective Tree and Indicator Matrix of the Social Development Sector in the
PDP 2011-2016
Source: NEDA, 2011a
According to the Global Fund (2011), a national M&E system should have an M&E plan, typically composed of the following sections: (1) M&E coordination mechanisms, describing the functions of stakeholders and partners and the composition of working groups; (2) an indicator measurement framework, presenting a list of indicators, baseline values, targets, data collection methods, frequency of data collection, and the person and agency responsible for data collection; (3) data collection strategies, describing data collection and reporting tools, including frequency and timeline; (4) data management, outlining how data and reports will be managed (i.e. data management infrastructure and systems); (5) data quality assurance mechanisms, describing the system that will ensure the quality of data collected and reported; (6) program review, evaluation and surveys, enumerating practices, plans and schedules for conducting program reviews, evaluations and surveys; (7) human resource capacity building, describing the M&E human resource capacity at the time the M&E plan was prepared; and (8) a costed M&E work plan, outlining costs related to the implementation of M&E activities and identifying technical assistance needs for conducting M&E.
The Philippines currently has only the second element, the indicator measurement framework, explicitly expressed through the RM. This is not to say that the other elements are completely non-existent; they may be found in separate documents (e.g. memoranda or guidelines), or are conducted as part of other government functions. For example, M&E
coordination mechanisms should follow the structure of the planning process: monitoring of target achievements should be done within the sector planning committees. This is indicated in another document, the RM Guidelines (issued by NEDA, but not formally published and disseminated), which also contains other elements such as criteria for the selection of indicators and, in lesser detail, data collection strategies and data management. Further, on the issue of M&E costs, the budget for RM-related activities (e.g. coordination meetings, indicator-setting workshops, publication of the RM volume) was tucked within the budget of NEDA as lead secretariat and facilitator of the Plan process. No budget was explicitly allocated for follow-up M&E activities such as program reviews and evaluations.
The conception of the RM certainly illustrates the government's commitment to results-orientation, as advocated by several international agreements the Philippine government has pledged to uphold. It was clearly a direct response to the challenge in the Paris Declaration (specifically, indicator 11) that "partner countries endeavour to establish results-oriented reporting and assessment frameworks that monitor progress against key dimensions of the national and sector development strategies and that these frameworks should track a manageable number of indicators for which data are cost-effectively available" (OECD, 2005:7).
However, the government focused most of its efforts on the identification of results objectives and indicators, without necessarily drawing up, in one comprehensive document, a clear strategy to guide the conduct of M&E activities related to the national plan. In a way, the RM is only an initial step towards a bigger results agenda; for instance, it paved the way for better linkages between planning and budgeting (see the section on the M&E linkage to the budget process).
A.2 Organization and Coordination Mechanisms
NEDA, the country's lead socioeconomic planning agency, is assigned the responsibility of overseeing the preparation of the Philippine Development Plan (previously known as the Medium Term Philippine Development Plan or MTPDP). Memorandum Circular No. 3, s. 2010 formalized this role by mandating NEDA to "coordinate the preparation of the [PDP 2011-2016]" (Malacanang, 2010:1). In addition, it enjoins all of government to participate in the planning process: "[the PDP is] to be jointly formulated by the executive and legislative branches of the government in a participative process involving
the other sectors of the Philippine society – the marginalized sectors and communities, and private organizations" (Malacanang, 2010:1). It adds that: "All government departments, offices and instrumentalities, including government-owned and controlled corporations, shall formulate their respective medium-term development plans and action programs. These plans and programs shall have results-oriented focus on national development goals" (Malacanang, 2010:1).
As planning and M&E are twin functions, and the M&E of the PDP follows the
arrangements of the planning process, the role of overseeing the monitoring of the progress of
the national plan was also assigned to NEDA, with the Monitoring and Evaluation Staff
(NEDA-MES) largely spearheading the RM process. The main coordination mechanism is
between NEDA (as lead secretariat) and Planning Committees (PC) and their subcommittees.
Chairmanship of the PCs and subcommittees is typically given to a high-ranking government
official, usually at the level of deputy minister (NEDA, 2011c).
Table 1. PDP 2011-2016 Planning Committees (PC) and Subcommittees and their Leadership

PC1 (Chair: National Economic and Development Authority)
Subcommittees: Macroeconomy (Chair: Philippine Institute of Development Studies);
Industry and Services (Chair: Department of Trade and Industry; Co-chair: National
Competitiveness Council); Modern and Competitive Agriculture and Fisheries (Chair:
Department of Agriculture; Co-Chair: Department of Agrarian Reform)

PC2 (Chair: National Economic and Development Authority)
Subcommittees: Transport (Chair: Department of Transportation and Communication);
Water (Chair: Department of Public Works and Highways); Energy (Chair: Department of
Energy); Communications (Chair: Commission on Information and Communications
Technology); Social Infrastructure

PC3 (Chair: Department of Finance)
Subcommittees: Financial Sector (Chair: Central Bank); Good Governance and Rule of Law
(Chair: Civil Service Commission)

PC4 (Chair: Department of Social Welfare and Development)
Subcommittees: Health, Nutrition and Population Management (Chair: Department of
Health); Education, Training and Culture (Chair: Department of Education); Housing and
Urban Development (Chair: Housing and Urban Development Council); Social Protection
(Chair: Department of Social Welfare and Development); Peace and Security (Chair: Office
of the Presidential Adviser on Peace Process)

PC5 (Chair: Department of Environment and Natural Resources; Co-Chair: Department of
Interior and Local Government)

Source: NEDA, 2011c
As seen in Table 1, government agencies (departments, councils and commissions)
take the lead in each of the PCs and subcommittees. However, each PC is also composed of
members from legislative bodies, non-government organizations and people's organizations,
the private sector, and academic institutions.
Again, as the M&E of the plan follows the arrangements of the planning process, the
preparation of the indicator matrices was done for each of the nine chapters through the PCs
and subcommittees, with a set of NEDA-issued RM preparation guidelines to steer the
process. To decide on the sector results objectives (sector outcomes and sub-sector outcomes)
and identify appropriate indicators, baselines and targets for these objectives, the PCs or sub-
committees were given the responsibility to organize a series of national and regional
consultations with stakeholders in their respective sectors. The national statistics agency was
actively on hand throughout the RM preparation process to provide information, especially
on baselines and targets.
Admittedly, this was the first time that a concrete results framework was implemented
alongside the PDP, but as NEDA has traditionally led the planning process, it was able to
smoothly mobilize and coordinate with line ministries to involve them in the RM process,
especially in the determination of priority indicators for their sectors.
Building an M&E system for the national plan, like the planning process itself, is essentially
a cross-cutting activity requiring the participation of the widest range of stakeholders both in
and outside of government. It requires a high level of coordination, which should be led by a
competent unit in the bureaucracy (Mackay, 2007).
The organization and coordination mechanisms discussed above, however, concern
only the design process at the preparation stage of the RM; they do not really answer the
question of how actively these stakeholders were able (or had the opportunity) to be involved
in actual M&E work.
In 2013, the PCs and subcommittees once again assembled in a series of meetings and
consultations to (a) revalidate the indicators of the RM (i.e. confirm whether they indeed
contribute to sector objectives and societal goal) and (b) update the matrices to reflect 2011-
2012 accomplishments vis-à-vis 2011-2012 targets. This function, however, is obviously
quite limited. It is as if M&E has been reduced to a task of simply updating statistical tables.
In the absence of a comprehensive M&E plan for the PDP, it is truly unclear how line
agencies, but especially the Parliament, civil society, and even the statistical agencies, are
supposed to participate in a wider scope of M&E activities for the plan, for example in
program reviews or evaluations. While these, of course, were already being conducted in
various sub-sectors and for many programs even before the RM was conceptualized, their
outputs remain for the use of their respective agencies and subsectors, and little effort was
made to assess these findings in terms of their contributions to higher-level results (NEDA,
2013a).
In principle, the RM, by providing a clearer picture of the causality chain per sector, aimed to
change these practices, so that all M&E activities, at least those starting at the sector level,
are conducted systematically and the resulting information is pooled for analysis.
Thus, without an M&E plan, it is also uncertain how the M&E arrangements of
individual line ministries relate to the overall national M&E system of the plan. This is
highly consistent with the results of the 2011 Paris Declaration (PD) survey on the Philippine
government's results framework. According to that survey, the government faces the
challenge of a lack of capacity to measure and monitor results at all levels, as well as
difficulties in establishing linkages and attributing the contribution of program and project
outputs to subsector- and sector-level results (OECD, 2011).
As we can see, the results agenda of the Philippine government flourished only
recently. Looking at the evolution of the Philippine M&E landscape, for the longest time the
focus of initiatives was output-level monitoring; by the mid-2000s, this shifted to a more
sector-based approach (NEDA, 2011b). After the endorsement of the Paris Declaration in
2005, concepts such as 'country-led evaluation' and 'national M&E systems' came to the fore
and became the language used by many donors, including the United Nations (UN), which is
considered one of the longest-standing development partners of the Philippines. The
government took notice and pledged to act, but it was only in 2010, in light of a new
presidency that was very supportive of a results agenda, that results-based initiatives took
shape, the highlight of which was the RM.
A.3. Capacity Building
The Philippine government's M&E capacity building issues encompass a lack of
knowledge, skills and tools to perform evaluations on its own, as well as consequent
budgetary and manpower constraints.
Today, demand for evaluation, as well as for evaluation results, is certainly high, but
the government's capacity to perform evaluations is characterized by a lack of appropriate
skill sets and tools. A national evaluation policy has been proposed by the government to set
evaluation standards, provide the groundwork for, and harmonize the various yet fragmented
M&E initiatives within the Philippine government.
NEDA did initiate the drafting of a national evaluation policy in 2012, but as of mid-
2014, it is still in the final stages of preparation, and has yet to be implemented. The policy is
also envisioned to set the methodological standards and provide guidance in the conduct of
various evaluation processes and activities of the government (for both ODA and locally
funded programs and projects) (NEDA, 2013b).
Segone (2010) emphasizes the need for a country-led national evaluation policy, "in
which the country (not the donors) leads and owns the monitoring and evaluation process"
(Segone, 2010:26). A policy would determine the following: what elements of projects will
be evaluated; what questions will be asked; what methods will be used; what analytical
approach will be selected; and, how the findings will be communicated and used (Segone,
2010). He also emphasizes that being country-led, the evaluation policy can build on the
strengths and capacity of current M&E systems, as well as on the values, culture and political
processes of the country (Segone, 2010).
A national evaluation policy may also be able to address the resource (budget and
manpower) constraints related to the performance of evaluation activities. On the issue of
manpower, for example, an evaluation policy can answer the following: Who will perform
evaluations? Will it be NEDA (as the lead M&E agency of the government)? These
questions are important, as they will determine whether there is a need to pile evaluation
responsibilities on top of the workload of existing staff, reorganize NEDA completely, or
hire new personnel to add to the current staff complement. These, of course, also have
budgetary implications. An evaluation policy can assure the availability of budgets
exclusively for evaluations; at present, however, evaluation budgets still have to be drawn
from other sources, such as technical assistance funds from donors.
While the Philippine government is still hard at work figuring out its evaluation
capacity agenda, other (more general) M&E capacity building initiatives have been executed
in recent years, all with technical assistance from donors. One example of a donor-funded
capacity building project was an initiative conducted from 2011 to 2013 through technical
assistance provided by the International Fund for Agricultural Development (IFAD) to the
government of the Philippines. The technical assistance, called Institutional Strengthening of
Results-Based Monitoring and Evaluation (RBME), aimed to strengthen and institutionalize
the capacities of government officials involved in project implementation and monitoring
and evaluation for results, through a series of RBME trainings. However, participation in the
trainings was limited to the NEDA Central and Regional Offices and the implementing
agencies of projects funded by IFAD (NEDA, 2013c). As in most trainings, there was also no
follow-on activity to this technical assistance to check and evaluate whether participants
were able to apply their learning in practice.
Another opportunity for M&E capacity building in the Philippines lies in the M&E
Network Philippines, launched in 2010 with the financial support of donors such as the UN
and, later, the Asian Development Bank (ADB). The Network (with NEDA as lead
secretariat) was convened to provide a venue for sharing M&E knowledge and experiences.
It was also foreseen as a system for the collection, dissemination and discussion of M&E
findings on particular themes (NEDA, 2011d). Three successful Network forums have been
organized in the last three years, but without support from donors, the Network has no
resources to mobilize other activities. Thus, its potential as a platform to strengthen M&E
practices has yet to be fully realized.
While M&E capacity development activities may be present, they are not necessarily
effective or sustainable. According to Segone (2010), most capacity development initiatives
focus only on strengthening individual capacities. When individuals participate in capacity
development, they must be able to transfer their newly acquired knowledge and skills back to
the organization. In practice, however, this rarely happens.
Segone (2010) argues that the key to a successful M&E capacity development
program is to take a systematic approach that includes not only individuals, but institutional
and external enabling environment components as well. The most critical, however, is
defining a strategy for capacity development at the institutional level that strengthens the
evaluation culture of the organization. When this is established, individuals also benefit. They
are better learners, use evidence to inform their actions, and develop a greater sense of
accountability (Segone, 2010). Capacity development targeted at the institutional level also
"increases efficiency and effectiveness by systematically using lessons learned to improve
programmes and policies" (Segone, 2010:36). Then again, Segone also points out that an
evaluation policy is first required in order to carry this strategy out, and the Philippine
government has yet to finalize and implement its policy.
B. Demand and Use Dimensions
B.1 Outputs and Dissemination
Progress in the achievement of the indicator targets contained in the PDP-RM was
reported through the 2011-2012 Socioeconomic Report (SER) and the Revalidated Results
Matrices document. The SER presents the outcomes achieved in the first two years of the
current presidency, based on the targets set out in the PDP and RM.
Further, in 2013, the government also undertook the process of revalidating and
reviewing the outcome statements and indicators of the existing RM. The output of this
revalidation exercise was a new volume of the RM, in which the (updated) indicator matrices
now include a report of 2011-2012 actual accomplishments vis-à-vis their corresponding
targets (NEDA, 2014). As the SER served as the basis for these updates, the two documents
basically report the same set of information. They are disseminated to a wide range of
stakeholders, including line ministries, Parliament, civil society and development partners
(see B.3 to B.5). Upon request, different audiences may be given a different version of the
report, based on their information needs. For instance, a member of Congress might request a
briefing on the progress of indicators within the RM, and NEDA will prepare a briefing note
or presentation specifically suited to his or her information needs. Then again, this is purely
ad hoc work, and as it stands, NEDA only produces and disseminates a standard format of
the SER.
Kusek and Rist (2004) remind suppliers of M&E information to take into
consideration the different interests, preferences and functions of varying consumers of
reports and findings. Audiences look for different things based on their functions, and M&E
practitioners must have insight into the variability of their audiences' needs (Kusek and Rist,
2004).
While the SER and the revalidated RMs are important outputs, they contain mostly
descriptions and lack analytical content; they are simply monitoring outputs. According to
Bedi et al. (2006), a report, if well presented and widely disseminated, becomes a key
instrument that allows stakeholders to be informed about government performance, and it
also supports accountability relationships, especially between the public and its government.
NEDA, which produces both reports, needs to view the publication of documents such
as the SER and RM as more than a requirement and part of its routine work. Suppliers of
M&E information and outputs must work harder to ensure that stakeholders are able to
access them, have a clear understanding and appreciation of them and, most importantly, use
these reports to support their different functions (Mackay, 2007).
B.2 M&E Linkage to the Budget Process
Meanwhile, in terms of the establishing the linkage between planning, M&E and
budgeting, it is NEDA which coordinates with the Department of Budget and Management
and implementing agencies to ensure that the alignment among the three processes is
enhanced. The figure below shows the Philippine Government‘s Public Sector Management
Cycle, and the processes and tools used to carry out each function. In this figure, we see how
the RM, indicated as both a planning and M&E tool, is envisioned to play a very active role
in establishing linkages and ultimately, achieving results.
Figure 2. Public Sector Management Cycle of the Philippine Government
Source: NEDA, 2011a
According to NEDA (2011a), the government's results framework still exhibits
fragmentation for several reasons, such as weak coordination among agencies, reliance on
donor-led systems, and institutional and capacity constraints. Recognizing these weaknesses,
the government embarked on an initiative to tighten its results framework.
Linking M&E to the budget began with efforts to establish the link between the
Organizational Performance Indicator Framework (OPIF), DBM's tool for measuring agency
performance, and the indicators in the RM. This was done through a series of workshops
among NEDA, DBM and 10 pilot sector agencies, with technical assistance from the Asian
Development Bank (NEDA, 2012). Agencies must come up with an output-outcome
structure, which not only presents an agency's outputs and organizational outcomes but also
identifies the sector outcomes to which they contribute. The ability of a department or
agency to deliver outputs and outcomes, and to show their contribution to higher-level
results, will then be the basis of its budget. The output-outcome structure must resemble the
figure below.
Figure 3. Output-Outcome Structure
Source: DBM, 2014b
Another way the current government uses M&E information from the RM process is
through Performance-Informed Budgeting (PIB), implemented under Executive Order No.
80, s. 2012 (Directing the Adoption of a Performance-Based Incentive System for
Government Employees). The EO assigns DBM as the lead agency for this process. DBM
(2014b) defines PIB as a "set of integrated processes that aim to improve the efficiency and
effectiveness of public expenditure by linking the funding to the results, making systematic
use of performance information, although not solely, in resource allocation and management"
(DBM, 2014b:3). The rationale behind the implementation of PIB is to "motivate higher
performance and greater accountability in the public sector and ensure the accomplishment
of commitments and targets under […] the Philippine Development Plan (PDP) 2011-2016"
(Malacanang, 2012:1).
According to DBM (2014b), the PIB is in itself a mechanism that facilitates increased
use of M&E information at different levels. At the department level, performance
information produced at the organizational outcome level allows departments to determine
whether they are achieving their planned results. Performance information also feeds into the
preparation of the National Expenditure Program, one of the main documents submitted to
Congress for budget approval. Thus, Congress can make more informed decisions about
which departments and agencies have done well in delivering their outcomes (DBM, 2014b).
Information generated from the PIB will also assist NEDA in determining which
outcomes or results of departments and agencies actually contribute to the sectoral and
subsectoral outcomes of the RM. Because outputs in the OPIF and outcomes in the RMs
were not identified at the same time, and thus are not necessarily linked, in the current
monitoring of the RM NEDA can only infer attribution to departmental activities and
projects; it does not really have direct evidence that outputs are actually advancing the
higher-level outcomes contained in the PDP (DBM, 2014b).
PIB information is also expected to help departments better align with the President's
Budget Priorities Framework, which encompasses the four priority objectives (inclusive
growth, sustaining growth momentum, good governance, and managing disaster risks)
deemed by the current administration as areas needing more resources in its remaining term
(2015 to 2016). The PIB will also keep the President and his Cabinet officials informed
about department performance by feeding into the Office of the President's Planning Tool
Commitments (yet another M&E tool, separate from the RM, for the use of the Cabinet
Secretaries who are directly accountable to the President) (DBM, 2014b).
PIB requires that both NEDA and DBM perform their individual functions well, and
at the same time keep close coordination with one another. Robinson and Last (2009) remind
governments that only if reliable and timely M&E information about the results being
delivered is available will it be possible to make performance-informed budget decisions.
NEDA, responsible for gathering and consolidating M&E information, should ensure that
data are easily accessible for budget processes. But the availability of performance
information is not sufficient for PIB; the information must actually be utilized (Robinson and
Last, 2009). It is then DBM's role to ensure that the proper mechanisms are in place to make
significant use of all these performance data.
As evidenced by all these initiatives, the use of information in the Philippine budget
process, though not perfect, is relatively the most developed element of the M&E demand
side. Again, all these initiatives coincided with a new presidency, whose administration was
committed to putting results at the center of its public sector functions (NEDA, 2011c).
Taking advantage of such an enabling environment, and with adequate support (in the form
of technical assistance funds) from like-minded donors, DBM and NEDA really stepped up
to the plate to carry out these initiatives. As oversight agencies holding relative influence in
the bureaucracy, they were able to generate buy-in and a solid stream of support from
different departments. And with the PIB, a budgeting system based on providing incentives,
DBM was able to compel departments to become more mindful of their contributions to
higher societal objectives and, just as important, to begin demanding and using M&E
information to understand and improve their performance.
B.3 Use of M&E Information and Outputs by Parliament
The parliament (the Philippine Congress) has little involvement in the M&E of the
national plan. As part of the planning committees, its members have a say in the selection of
priority strategies, as well as in their corresponding indicators.
Members of Congress are of course actively involved in the budget process, which
makes them users of M&E information, although not specifically the information produced
by the plan's M&E system. As mentioned earlier, the PIB provides a new mechanism to
facilitate Congress's use of performance information, aiding it during the budget approval
process. In addition, within the Philippine parliament is the Congressional Policy and Budget
Research Department (CPBRD), whose mandate is to provide technical information on
social, economic and institutional policy issues to members of Congress. One of its other
relevant mandates is to provide analysis of the Philippine Development Plan (CPBRD, 2014).
That the CPBRD exists and performs this function confirms that there is a window of
opportunity for greater use of M&E information and outputs within the Parliament.
Parliament's lack of use of outputs, as well as its limited participation, may have to
do with its "lack of understanding of M&E systems and the opportunities these systems
present for parliamentary engagement" (Bedi et al., 2006:48). When parliament fails to make
use of M&E information for policymaking, this could be considered a missed occasion for
members to perform their roles of oversight, control of the executive, and even
representation of their constituencies. On the part of government, units responsible for M&E
must identify groups within parliament that may have special needs for M&E information
(Bedi et al., 2006). In the case of the Philippines, NEDA could engage with the CPBRD, for
example, in joint analysis of some of the findings contained in the SER.
Further, for M&E information and outputs to be useful and have an impact, M&E
systems must develop outputs that suit the needs of Parliament and are timed correctly to be
integrated into planning and budgeting cycles. Data also need to be presented in both
technical and nontechnical forms so that lawmakers find them accessible (Bedi et al., 2006).
B.4 Use of M&E Information and Outputs by Civil Society
As in the case of Parliament, civil society's participation in the plan M&E system,
and its use of M&E information and outputs, is limited.
The current administration has expressed openness towards expanding partnerships
with civil society in areas such as participatory audit and budget transparency. Civil society
groups have been quick to take up this opportunity, but repeatedly find that the government
does not see much value in their participation (CODE-NGO, 2010).
For this planning cycle, selected members of civil society were part of the planning
committees. This means that they shared the responsibility and, in fact, were consistently
active in putting forward strategies and indicators that would best support the interests of
their constituencies. The Caucus of Development NGO Networks (CODE-NGO) was one
group invited to represent civil society in the planning committees. CODE-NGO (2011)
noted that while the government included them in the PDP process, a significant number of
their recommendations, although noted, did not make it into the final version of the plan.
Civil society groups also have no involvement in the government's M&E practices or
influence over how processes are carried out. Whether, and to what extent, they use the
PDP's M&E information and outputs also remains undocumented by the government. What
is known, however, is that civil society groups conduct their own independent assessments
of the national plan. CODE-NGO, for example, released an assessment report of the
2004-2010 PDP (CODE-NGO, 2011).
According to a mapping and assessment study of civil society in the Philippines
(Tuaño, 2011), civil society participation in government processes is limited. On the part of
the government, efforts have been made to provide space for institutionalized participation at
the national level, for instance by allowing civil society membership in commissions and
councils such as the National Anti-Poverty Commission and the Commission on the Role of
Filipino Women (Tuaño, 2011). The author cites other factors for weak participation:
inadequate capacity for engagement with the government on a sustained basis; the lingering
distrust of government among civil society groups, non-government organizations (NGOs)
and people's organizations (POs); and the hostility of some government officials towards
NGOs. Further, a lack of capacity in M&E and analysis, as well as financial constraints, also
deters civil society from playing the more active role in government that it desires, for
example in the M&E of donor-funded programs and projects (Tuaño, 2011).
Valadez and Bamberger (1994) confirm the above: civil society has generally been
known to be disapproving of governments and their projects, "believing them to be
expensive, influenced by political considerations, or unavailable to many of the groups that
most need them" (Valadez and Bamberger, 1994:429). If involved in government processes
such as M&E activities, civil society groups would not wish to lose independence of action
or be "bogged down by bureaucratic procedures" (Valadez and Bamberger, 1994:429). The
government, for its part, remains closed to civil society groups, as it expects only criticism
and negative evaluation of its work (Valadez and Bamberger, 1994).
B.5 Use of M&E Information and Outputs by Development Partners
Out of the many stakeholders, donors' demand for M&E information produced by the
government is the highest. Major development partners such as the Asian Development Bank
(ADB), the World Bank (WB) and the UN have taken a keen interest, especially in the RM,
and have used it mainly to align their strategies with those of the Philippines.
ADB, for example, conducted several consultations with the Philippine government,
through NEDA, to align its Country Partnership Strategy 2011-2016 and its Country
Operations Business Plan 2015-2017 with the priorities indicated in both the PDP and the
RM (ADB, 2011). Meanwhile, the WB undertook a similar initiative for its own newly
released Country Partnership Strategy for the Philippines 2015-2018; the WB's overall
country strategy supports the country's goals of promoting inclusive growth, poverty
reduction and job creation (World Bank, 2014b). The UN has also engaged the government
in an alignment process for its United Nations Development Assistance Framework
(UNDAF) 2012-2018, going as far as creating its own results matrices, which are completely
based on the PDP-RM (UN, 2011).
These initiatives show donors' adherence to the principles of the Paris Declaration, in
this case the principle of alignment (OECD, 2005). While these alignment initiatives are
commendable, however, they also illustrate uncoordinated efforts among donors, which is
somewhat inconsistent with the principle of harmonization. Even so, the alignment of
donors' strategies with the country's priorities also bodes well for aligning donor and country
M&E processes, thereby lessening duplication of functions and the administrative burden on
the government.
7. Conclusions and Recommendations
The landscape of M&E in the Philippine government continues to evolve, and has
been marked by a number of improvements in the last decade. Progress, specifically at the
national level, has been greatly influenced by the principles and practices advocated by the
international community in this new era of aid reforms – ownership, alignment,
harmonization, accountability and managing for results. And with such strong emphasis on
results in the field of development, the Philippine government is now relying more than ever
on M&E as an indispensable tool to strengthen its government processes.
In the last four years, the government has set out on a results agenda that aimed to
incorporate results into all areas of public sector management, the highlight of which is the
introduction of a results framework to the national plan. The Results Matrices were created
as an M&E system (although primarily a monitoring mechanism) for the Philippine
Development Plan 2011-2016. The design, as well as the implementation, of this M&E
system has been beset by many challenges since its inception in 2010.
According to Bedi et al. (2006), to surmount practical challenges, it is helpful to see
M&E as having both supply and demand sides. Any initiative seeking to improve an M&E
system should, however, focus on the institutional rather than the technical dimensions of
M&E, that is, on designing and implementing a coherent M&E system (the supply side) and
on ensuring the use of M&E information and outputs to inform policy and decision making
(the demand side).
This study, thus, sought to determine the strengths and weaknesses of the M&E
system of the current Philippine Development Plan (PDP), through a diagnosis of both its
supply and demand dimensions.
In terms of supply, a diagnosis was made for three dimensions: plan and policy;
organization and coordination mechanisms; and, capacity building.
First, it was found that the M&E system of the PDP lacks a comprehensive M&E
plan to guide its design and implementation. Of the many components of a typical M&E
plan, the only element made available, formally written and disseminated by the
government, was an indicator measurement framework, which is contained in the RM. The
absence of a comprehensive M&E plan did not have much impact on the coordination aspect
of the design of the RMs, as this already follows the existing set-up of the planning process.
Without an M&E plan, however, there was not enough guidance on how data collection, data
management, and even budgets (for related M&E activities) should be executed. It was also a
missed opportunity to create a dissemination strategy, which would have helped raise the
awareness of a wider range of stakeholders about the availability of M&E information. As
this study has shown, dissemination is a key strategy for creating demand for, and use of,
M&E.
Coordination between government agencies during the design stages of the RMs is one of the strengths of the supply side, as it benefited greatly from the existing, quite organized coordination set-up of the planning process. Furthermore, NEDA, which has traditionally overseen the crafting of the national plan, was able to exercise its influence and generate buy-in and support from line ministries for this M&E initiative.
Capacity is altogether a different story, as it is one of the weakest elements of the supply side of the M&E system, specifically in terms of the government's capacity to conduct evaluations. While the current RM process calls for a greater exercise of the government's monitoring function (where it is admittedly more advanced), evaluation remains relevant to its overarching results agenda. A national evaluation policy, which aims to guide all evaluation activities in the country, was proposed in 2012 but has yet to take effect. Such a policy would create an enabling environment for setting standards and harmonizing all M&E initiatives within the government, thereby also contributing to strengthening the supply side of M&E.
As emphasized many times in this paper, a national M&E system, its design process, and the information it produces may be sound, but if they are not used, they hold no value. Authors (Mackay, 2007; Kusek and Rist, 2004) have put forward that the ultimate measure of success of an M&E system is its utilization to support the government's core functions. This is the demand side of M&E.
A diagnosis was then undertaken for five demand-side dimensions: outputs and dissemination; M&E linkage to the budget process; use of M&E information and outputs by Parliament; use of M&E information and outputs by civil society; and use of M&E information and outputs by development partners.
First, the outputs and dissemination dimension of the M&E system was found to be relatively weak. The main output of the M&E process is the Socioeconomic Report (SER), which provides a comprehensive status of the progress of the plan strategies and the achievement of indicator targets. NEDA produces the SER in one standard format, as has traditionally been done, and disseminates it to all those involved in the planning process (including the general public, by posting it on its website). This kind of dissemination strategy clearly does not encourage the use of M&E information, as evidenced by the limited usage on the part of Parliament and civil society. Here, we see how a weakness on the supply side can cause weak demand, in that stakeholders do not seem to see the value of obtaining M&E information and using it for their own functions. Mackay (2007) described this situation as a chicken-and-egg problem: there is a lack of demand for M&E because of a lack of understanding of what M&E can provide; understanding is limited because of a lack of experience in M&E; and the lack of experience in M&E is caused by weak demand.
Even so, if the government, particularly its central M&E unit, is truly committed to pursuing its results agenda, it will use its position on the supply side of M&E to ensure that whatever M&E information and outputs it provides are presented in a clear, understandable manner. By presenting better outputs and making them accessible, especially to stakeholders with significant interests in M&E information, the government can increase the demand for and use of M&E to support functions such as budgeting and policy making.
Meanwhile, the 'linkage to the budget process' is the most noteworthy dimension on the demand side. The government's performance-informed budgeting (PIB) initiative illustrates how even one working element on the supply side (i.e. a well-organized coordination structure between the planning and budget offices) holds the potential to create space for greater utilization of M&E information. Whether the PIB system will be effective remains to be seen, but if implemented right, the PIB will be a good illustration of how M&E, when effectively used, can improve government systems and the way policy making is done.
In its current state, the M&E system of the Philippine Development Plan certainly has its own strengths and weaknesses. If improvements are to be made, the government must examine in depth both the supply and demand sides of the system.
The researcher thus proposes the following recommendations.
A comprehensive M&E plan should be formulated before the M&E system of the next national plan is rolled out and implemented. If it is not too late for the existing M&E system, an M&E plan should also be crafted covering the remaining two years (2015-2016). What can be highlighted is a strategy for conducting program reviews and, if possible, evaluation activities. These will be especially important for learning whether (selected) strategies, expected to have produced results by 2016, were able to deliver.
An M&E plan would also address the challenges related to outputs and dissemination by including a set of guidelines on how best to present outputs and make them available to the widest possible audience, thereby contributing to the creation of demand. And if the M&E plan is really to be done comprehensively, it can also look into creating mechanisms for more meaningful participation of all relevant actors.
But perhaps before crafting an M&E plan, the government should engage in a more comprehensive diagnosis or stocktaking of the current state of M&E, including the systems of sector agencies and other non-government actors such as Parliament and civil society. This could be led by NEDA, but as it is such a huge undertaking and resources (manpower and budget) are definitely a constraint, it could be done jointly with willing development partners. Through this diagnosis, the government can explore the different dimensions of both the supply and demand sides, better understand the interaction between the two, and therefore prepare a more informed and substantial M&E plan.
Even if this exercise does not benefit the M&E system of the current plan, whatever can be gleaned from it will be useful for improving the national planning and M&E processes of the next administration.
Lastly, as evaluation is said to be high on the government's results agenda, it is proposed that the government finalize its national evaluation policy. If decreed, a national evaluation policy is expected to address a number of overarching issues in M&E discussed in this study – budget, capacity building, and mechanisms for participation and utilization of M&E. On its own, of course, a policy does not guarantee a successful M&E system. The government is expected to play a crucial role in steering the policy and the proposals therein towards efficient and effective implementation.
Bibliography
Asian Development Bank (2011) Country Partnership Strategy, Manila, ADB.
Bedi, T., et al. (2006) Beyond the Numbers: Understanding the Institutions for Monitoring
Poverty Reduction Strategies, Washington, DC, The World Bank.
Caucus of Development NGOs Networks (2011) Assessment of the 2004-2010 Medium Term
Philippine Development Plan, Quezon City, CODE-NGO.
Caucus of Development NGOs Networks (2011) Some Gains, More Frustration: CSOs’
Participation in the Philippine Development Planning (PDP) Process, http://code-
ngo.org/home/component/content/article/43-front/237-some-gains-more-frustration-csos-
participation-in-the-philippine-development-planning-pdp-process.html (last consulted: 27
August 2014).
Commission on Audit (2013) 2012 Consolidated Audit Report on Official Development Assistance Programs and Projects, Quezon City, COA.
Congressional Policy and Budget Department (2014) Brief History,
http://www.cpbo.gov.ph/index.php/2012-06-30-13-05-43 (last consulted: 27 August 2014).
Dale, R. (2004) Evaluating Development Programmes and Projects, 2nd Edition, New Delhi, Sage Publications.
Department of Budget and Management (2014) Mandate,
http://www.dbm.gov.ph/?page_id=343 (last consulted: 20 August 2014).
Department of Budget and Management (2014) Moving from Outputs to Outcomes: The
Continuing Evolution of PIB in the Philippines, Manila, DBM.
Edmunds, R. and Marchant, T. (2008) Official Statistics and Monitoring and Evaluation
Systems in Developing Countries: Friends or Foes?, Paris, PARIS21.
Executive Order No. 80 (2012) Malacanang, Manila.
Holvoet, N. and Inberg, L. (2012) "Sector Monitoring and Evaluation Systems in the Context of Changing Aid Modalities: The Case of Uganda's Health Sector", Study for DGD and BTC
in the context of the O*Platform Aid Effectiveness, Antwerp, Institute of Development
Policy and Management.
Holvoet, N. and Renard, R. (2006) "Monitoring and evaluation under the PRSP: Solid rock or quicksand?", Evaluation and Program Planning, 30 (2007): 66-81.
Holvoet, N. and Rombouts, H. (2008) "The Challenges of Monitoring and Evaluation Under the New Aid Modalities: Experiences from Rwanda", Journal of Modern African Studies, 46
(4): 577-602.
Holvoet, N., Gildemyn, M. and Inberg, L. (2012) "Taking Stock of Monitoring and Evaluation Arrangements in the Context of Poverty Reduction Strategy Papers: Evidence from 20 Aid-Dependent Countries in Sub-Saharan Africa", Development Policy Review
30(6): 749-772.
International Monetary Fund (2014) Factsheet Poverty Reduction Strategy Papers,
Washington, DC, International Monetary Fund.
Krause, P. (2012) ―M&E Systems and the Budget‖, in Lopez-Acevedo, G., Krause, P. and
Mackay, K. (eds.), Building Better Policies: The Nuts and Bolts of Monitoring and
Evaluation Systems, Washington DC, World Bank, 75-86.
Kusek, J. and Rist, R. (2004) A Handbook for Development Practitioners: Ten Steps to a
Results-Based Monitoring and Evaluation System, Washington DC, World Bank.
Mackay, K. (2007) How to Build M&E Systems to Support Better Government, Washington,
DC, The World Bank.
Memorandum Circular No. 3 (2010) Malacanang, Manila.
National Economic and Development Authority (2011) Philippine Development Plan 2011-
2016 Results Matrices, Pasig, NEDA.
National Economic and Development Authority (2011) "M&E in the Philippines: Challenges and Prospects", First M&E Network Forum, Conference Paper, Pasig, NEDA.
National Economic and Development Authority (2011) Philippine Development Plan 2011-
2016, Pasig, NEDA.
National Economic and Development Authority (2011) The Monitoring and Evaluation
Network Philippines, http://devplan.neda.gov.ph/m&e-network/ (last consulted: 26 August
2014).
National Economic and Development Authority (2012) CY 2011 ODA Portfolio Review
Report, Pasig, NEDA.
National Economic and Development Authority (2013) "Tightening the Government's Results Framework", Workshop on Results Based Monitoring and Evaluation, Workshop Presentation, Pasig, NEDA.
National Economic and Development Authority (2013) "Draft National Evaluation Policy".
National Economic and Development Authority (2013) CY 2012 ODA Portfolio Review
Report, Pasig, NEDA.
National Economic and Development Authority (2014) Philippine Development Plan 2011-
2016 Revalidated Results Matrices, Pasig, NEDA.
OECD (2005) Paris Declaration on Aid Effectiveness, Paris, OECD/DAC.
OECD (2008) 2008 Survey on Monitoring the Paris Declaration: Making Aid More Effective
By 2010, Paris, OECD/DAC.
OECD (2008) Emerging Good Practice in Managing for Development Results Source Book, 3rd Edition, Washington DC, OECD.
OECD (2009) Supporting evaluation capacity development in partner countries – next steps
for the Network, Room Document, Paris, OECD/DAC.
OECD (2011) Aid Effectiveness 2011: Progress in Implementing the Paris Declaration –
Volume II Country Chapters Philippines. Paris, OECD/DAC.
OECD-DAC (2002) Glossary of Key Terms in Evaluation and Results Based Management, Paris,
OECD.
Porter, S. and Goldman, I. (2011) "A Growing Demand for Monitoring and Evaluation in Africa", African Evaluation Journal, 1(1): 1-9.
Republic Act No. 8182: Official Development Assistance Act of 1996 (1996) Congress of the
Philippines, Metro Manila.
Robinson, M. and Last, D. (2009) A Basic Model of Performance-Based Budgeting,
Washington, DC, International Monetary Fund.
Segone, M. (2010) "Moving from Policies to Results by Developing National Capacities for Country-Led Monitoring and Evaluation Systems", in Segone, M. (ed), From policies to
results: developing capacities for country monitoring and evaluation systems, New York,
UNICEF, 22-43.
Stern, E. (2008) Thematic Study on the Paris Declaration, Aid Effectiveness and
Development Effectiveness, Koege, Ministry of Foreign Affairs of Denmark.
The Global Fund (2011) Monitoring and Evaluation Toolkit: HIV, Tuberculosis, Malaria and
Health and Community Systems Strengthening, Geneva, The Global Fund.
Tuaño, P. (2011) "Philippine Non-Government Organizations (NGOs): Contributions, Capacities, Challenges", in Jose, L. (ed.) Civil Society Organizations in the Philippines, A
Mapping and Strategic Assessment, Quezon City, Civil Society Research Institute, 9-46.
United Nations (2012) United Nations Development Assistance Framework 2012-2018,
Makati City, United Nations in the Philippines.
Valadez J. and Bamberger, M. (1994) Monitoring and Evaluating Social Programs in
Developing Countries: A Handbook for Policymakers, Managers and Researchers,
Washington, DC, The World Bank.
World Bank (2007) Results-Based National Development Strategies: Assessment and
Challenges Ahead, Washington, DC, World Bank.
World Bank (2014) Comprehensive Development Framework,
http://web.worldbank.org/WBSITE/EXTERNAL/PROJECTS/STRATEGIES/CDF/0,,pageP
K:60447~theSitePK:140576,00.html (last consulted: 18 August 2014).
World Bank (2014) Philippines Country Partnership Strategy, Manila, World Bank.
Annex 1
Checklist/Guide Questions for Diagnostic Exercise
A. Supply Side
Plan and Policy
● Is there a single PRS monitoring strategy or master plan? What is its status? Is it being implemented?
Organization and Coordination Mechanisms
● Which mechanisms, such as committees or working groups, have been established to facilitate coordination
among agencies and stakeholders?
● Is their composition stable?
● Are various stakeholders represented at an appropriate level to reflect and ensure their commitment?
● Is there a functioning secretariat of the PRS monitoring system?
● Are the meetings organized in a way that supports coordination?
● Are the information flows adequate to support coordination?
● Is the burden on participants excessive?
Capacity Building
● Is there an overall capacity-building program or plan? Does it identify needs and gaps? Is it clearly
prioritized? Is it costed and funded?
● Are development partners key funders? What are their funding trends? How sustainable and predictable is
their funding? Are they supporting the overall system or only selected activities by certain actors? Is the
government providing guidance to development partners on supporting capacity development?
● Are development partners funding technical assistance in the design and strengthening of the PRS monitoring
system? Are skills being transferred to the country as a result of this assistance?
● Are substantive capacity-building efforts in monitoring, analysis, and evaluation currently under way in the
country? Are they directly related to the PRS monitoring system? Are they at the national, sectoral, or project
levels?
● How sustainable are the capacity-building efforts and the ability to retain the capacity created over the
medium to long term?
B. Demand Side
Outputs and Dissemination
● Is there a catalog of outputs? Does it include all the data and analytical products? Is it widely available and
updated regularly?
● Is there a calendar schedule of outputs? Is it advertised?
● Are outputs simultaneously released to all interested parties? Do all users have equal access?
● Are the sources, methods, and procedures related to the production of outputs published and available to all
users?
● Are the products available in various formats for users who have different levels of familiarity with and
literacy in the topics covered, different needs in terms of the depth of information, and so on?
● Is there a dissemination strategy? A communication strategy? Are selected actors in the monitoring system in
charge of these activities?
● Do systems exist to maintain and disseminate information? Are they user-friendly?
M&E Linkage to Budget
● Is there integration of M&E results in planning, M&E and budgeting?
Parliament
● Does the PRS monitoring system embrace a strategy for disseminating monitoring outputs on poverty to
parliament? Does the system provide for parliament as one of the users? Are the timing and form of outputs
appropriate to the needs of parliament?
● How does parliament use the information provided by the monitoring system, the finance ministry, or sector
ministries? Use it in formal hearings among parliamentary committees? In other ways?
● Does parliament communicate its data needs informally or formally through legislation requiring particular
information?
● Does parliament have the capacity to use monitoring information effectively?
Civil society
● Are strong pressures exerted by civil society—the media, nongovernmental organizations, universities, interdisciplinary research entities, and so on—on government for information about the performance of government in reducing poverty?
● Does the PRS monitoring system have a strategy for disseminating monitoring outputs to the general public?
Are the timing and form of the outputs appropriate to the needs of the various audiences among the public?
● Is monitoring and evaluation information published widely in the media?
● Does civil society communicate its data needs formally to the PRS monitoring system?
Development partners
● What are the monitoring and reporting requirements of development partners?
● Are development partners using the PRS monitoring system for their own monitoring and reporting needs?
What other mechanisms are they using (other project and program monitoring systems, internal systems, and
so on)?
● Is the demand for monitoring and evaluation among development partners the main source of demand in the
country? If yes, is this because existing national capacity cannot serve development partners and domestic
clients at the same time or because there is little domestic demand?
● What is the impact of the demand by development partners on agencies that produce data and information?
● Have development partners coordinated their monitoring requirements?
Annex 2
Diagnostic Tool: The Institutional Dimension of Monitoring Systems²
Institutional Context and Design of the PRS Monitoring System
What is the design of the existing PRS monitoring system? What is the institutional context surrounding the PRS monitoring
system? For example, what is the context in terms of coordination, leadership, legislation? In what ways does the
institutional context support the PRS monitoring system?
The design process for the PRS monitoring system
● Is there a single PRS monitoring strategy or master plan? What is its status? Is it being implemented?
● Did the design process include a diagnosis of existing monitoring arrangements? Were monitoring systems already in
place that could be used for the monitoring and analysis of progress in terms of PRS inputs, outputs, and outcomes? Were
these systems incorporated into the PRS monitoring system?
● Did the design process include a stakeholder analysis? Were existing and potential stakeholders of the PRS monitoring
system process identified?
● Did the design process include a needs assessment? Were the various stakeholders, including institutions, consulted about
their needs? How was this concern incorporated into the system?
● Did the design process include a data diagnostic? Were the various data needs for the PRS monitoring system mapped
out? What data sources existed? Were these incorporated into the system? How was this done?
● Was the design process participatory? Were stakeholders invited to participate in the process of designing the system? In
what ways did they help design the system?
Institutional leadership
● Does the government have a political commitment to the PRS monitoring system? Has there been explicit support at a
high political level? Are there champions actively making the case for a common monitoring system across the
administration?
● Which agency leads on the design, coordination, and implementation of the PRS monitoring system (for example, the
ministry of finance, the ministry of planning, the office of the prime minister, president, or vice-president)?
● Is the choice of locus of leadership conducive to providing actors with incentives to participate in the PRS monitoring
system (that is, close to the budget and planning processes)? Does it effectively play its role?
Coordination
Coordination mechanisms
● Which mechanisms, such as committees or working groups, have been established to facilitate coordination among
agencies and stakeholders?
● Is their composition stable?
● Are various stakeholders represented at an appropriate level to reflect and ensure their commitment?
● Is there a functioning secretariat of the PRS monitoring system?
● Are the meetings organized in a way that supports coordination?
● Are the information flows adequate to support coordination?
● Is the burden on participants excessive?
Oversight
● Is there a high-level body able to provide oversight and encourage compliance within government administration?
● How active is this body?
Liaison with local government
● Where this might be relevant, are regional and local governments represented within the coordination mechanism of the
PRS monitoring system?
● Are local governments participating actively in the system? Do incentives support or hamper effective coordination?
● Is the institutional design of the system too elaborate for the capacities of local governments?
Liaison with line ministries
● How do liaisons with line ministries and other agencies function in the PRS monitoring system? How does the system
relate to the monitoring arrangements of line ministries?
² Bedi et al. (2006: 59-72)
● Do line ministries take the liaison function seriously? Do they participate actively in the monitoring system? Which
incentives support or hamper effective coordination?
● Is the requirement to monitor inscribed in the budgets of line ministries? Within the organizational structures of line
ministries? In the job descriptions issued by the ministries?
● Is the institutional design of the monitoring system too elaborate with respect to the capacities of line ministries?
Liaison with civil society
● Is civil society participating in the working groups and committees of the PRS monitoring system?
● Are these civil society groups participating actively in the system? Which incentives support or hamper effective
coordination?
● Is civil society represented in an appropriate manner? Who selects the civil society representatives?
● Have civil society organizations been adequately consulted about the roles they may wish to play? Are they able to fulfill
these roles?
Liaison with development partners
● Are development partners providing incentives and other encouragement to government agencies to use PRS monitoring
information?
● Are development partners using the PRS monitoring system?
● Are development partners supporting or crowding out national accountability mechanisms?
● To what extent is the demand for monitoring data from development partners coordinated? To what extent is the demand
from development partners uncoordinated? What is the resulting impact on the functioning of the PRS monitoring system
and the related actors? Do the differing monitoring requirements of development partners contribute to a sense of
territoriality among government agencies and thereby discourage coordination?
Legislation and regulation
● Are the roles and responsibilities of various actors clearly set out? Is this supported by a legal framework? What is the
nature of this legal framework? Has the framework been implemented?
● Is the lead agency within the PRS monitoring system explicitly charged with the compilation and dissemination of the
outputs of the system?
● Is there legislation regulating the access to and dissemination of information and data in the country? Does it provide
incentives to disseminate information widely or does it restrict information flows? Are the data producers effectively
required to provide their information to other users within and outside government?
● Have quality standards been set for data?
Outputs and links to policy-making processes
● Are the outputs of the PRS monitoring system designed within a perspective on how they are to be used in policy
making? Have the relevant policy-making processes been mapped out? Have the entry points for system outputs been
identified? Have system activities been defined accordingly?
● Do mechanisms exist for consulting users within or outside government on the relevance of the outputs, emerging needs,
and priorities that the PRS monitoring system should address? Do these consultations influence the functioning of the
system? How?
● What are the institutional links between the PRS monitoring system and policy-making processes? Are outputs produced
in a timely fashion to affect particular events, including budget preparations, parliamentary hearings, planning sessions,
budget approvals, reporting, and so on? Are these links effective? Are there other channels through which the information
produced by the system may influence policy?
● Is there evidence that information produced by the PRS monitoring system has been used by the government during
various decision-making cycles such as for budgets, sectoral plans, investment planning, and so on? Is monitoring
information circulating beyond government and stimulating public debate on policy choices?
National statistics
● Is there a functioning national statistical system where various data producers may coordinate their activities, common
standards and principles are issued, and so on? Is there a national statistics institution? Is there a national statistical
master plan?
● How well are the PRS monitoring system and the national statistical system integrated? Are there overlaps between the
two systems? Potential rivalries and conflicts? Is the PRS monitoring system consistent with other plans and processes
for the development of the statistical system?
● What roles does the national statistics institution play in the PRS monitoring system? A standard-setting, technical-
assistance, or capacity-building role? Does the national statistics institution have the resources to fulfill its roles?
Ability of the PRS Monitoring System to Supply Information
Is the PRS monitoring system able to supply the data and analysis needed by users? Is the framework able to provide
adequate resources for the monitoring processes?
Capacity for data production
Are data relevant to the elaboration and monitoring of the PRS generally available? Are data deficient in particular areas?
Where are the gaps?
Definition
● How are the data collection and computation activities of the agency determined?
● Are users and other experts and specialists consulted on issues, gaps, emerging needs, and priorities?
● Do the outcomes of these consultations influence the process of data collection and compilation and the work program?
Sources
● What are the main sources of the data? Administrative records? Budgets? Population censuses? Household surveys?
Others?
● Who is responsible for collecting and compiling the data?
Relevance
● What is the frequency or periodicity of data collection on particular issues (monthly, quarterly, annually)?
● What is the length of time between the reference period and the distribution and use of the data? Is this lag too long,
thereby limiting the uses of the data for decision making?
● What level of disaggregation is available (geographic, gender, socioeconomic status)?
Standards
● Do processes and procedures in data compilation adhere to professional and ethical standards?
● Is an agency, such as the national statistics institution, responsible for enforcing the standards? Does it effectively play
this role?
● Are the data consistent internally and with other data sets? Are there processes in place to check the accuracy and reliability
of the data?
● When discrepancies are found, are they investigated?
Coordination
● Are the data collection activities of the agency, its technical platform, its standards, and its definitions coordinated with
the other activities of the PRS monitoring system? In particular, how is the PRS monitoring system linked to the
monitoring units and other arrangements in line ministries? In local level agencies?
● Are there issues of incompatibility (differing definitions, systems, geographic coverage, and so on)?
Manpower
● Does the agency have a dedicated monitoring unit?
● What is the capacity of the agency or the agency's monitoring unit in terms of the number and qualifications of the staff?
In terms of staff turnover?
● Are monitoring burdens excessive for the capacity of the agency or monitoring unit?
Resources
● What resources, including physical infrastructure, are available for the collection and compilation of monitoring data?
● To what extent is data gathering financed by external development partners? How sustainable and predictable are these
funds?
Dissemination
● Are the data understandable and clearly presented?
● Are the processes and procedures for data compilation transparent?
● Are the data published or otherwise available to the public? In what forms are they available? How are they
disseminated?
For public expenditure data
● Are systems in place to track poverty-related expenditures?
● How is the PRS monitoring system linked to the development of budgetary and public expenditure management systems?
● If accurate expenditure data are unavailable, are other techniques being used to monitor expenditure (such as public
expenditure tracking surveys and public expenditure reviews)?
For regional government data
● What are the roles of central and subnational governments and agencies in monitoring decentralized services? What sorts
of data are collected by each actor?
● How are the data aggregated and analyzed? Who performs these functions?
● Are there multiple systems for monitoring and reporting? Are these systems compatible?
● Are there incentives to distort the data?
Capacity for analysis
● Which agencies and units inside and outside government are responsible for analyzing monitoring information (ministry
of finance, ministry of planning, local governments, local agencies, line ministries, the central bank, the national statistics
institute, civil society, development partners, universities, research centers, and so on)?
● What is their capacity? How are these agencies and units funded? Are the government agencies and units effectively
mandated and resourced? How reliable are the funding arrangements of the agencies and units?
● How is the work program of these agencies and units determined? Is there a mechanism to define activities in light of the
needs of the end users?
● What is the quality of this work? Are the analysts considered objective? Is the quality of the analysis limited by data
constraints? What is the level of the demand for the work of the analysts?
● Are the analysts able to communicate their analyses effectively to end users in an appropriately adapted format?
● What types of analyses (regular or one-off) have been effectively produced? Are these sufficient to fulfill the needs of
system users? What are the gaps in analysis?
Capacity for evaluation
● What are the requirements and procedures for evaluating PRS programs? Are the data and information gathered through
monitoring activities used to support evaluations?
● To what extent are evaluations and reviews undertaken or commissioned? What types of evaluations and reviews are
carried out? Expenditure tracking surveys? Participatory monitoring and evaluation? Rapid reviews? Impact evaluations?
Performance audits? How frequently are the evaluations and reviews performed? What is the quality of the output?
● Who are the main actors who undertake or commission the evaluations and reviews? Are these evaluations and reviews
undertaken on the actor's or agency's own initiative? To what extent do government ministries undertake or commission
evaluations and reviews of their own performance?
● Are evaluations and reviews that are commissioned by development partners the main source of this type of work in the
country? Are any of these evaluations and reviews conducted jointly with the government? If so, what is the level of
government input?
● Are evaluations and reviews commissioned by the government from civil society groups such as universities and
interdisciplinary research groups? Does civil society provide policy advice to the government during these evaluations
and reviews?
● Are the findings of evaluations reported? To whom are they reported? Parliament? Development partners? How are the
findings reported or published?
● Do any particular actors or agencies follow good practices?
Outputs and Dissemination
● Is there a catalog of outputs? Does it include all the data and analytical products? Is it widely available and updated
regularly?
● Is there a release calendar for outputs? Is it advertised?
● Are outputs simultaneously released to all interested parties? Do all users have equal access?
● Are the sources, methods, and procedures related to the production of outputs published and available to all users?
● Are the products available in various formats for users who have different levels of familiarity with and literacy in the
topics covered, different needs in terms of the depth of information, and so on?
● Is there a dissemination strategy? A communication strategy? Are selected actors in the monitoring system in charge of
these activities?
● Do systems exist to maintain and disseminate information? Are they user-friendly?
Capacity building
● Are specific budgetary resources allocated for PRS monitoring? For central activities (such as the secretariat)? For the
various components (for example, line ministries, universities, and so on)? Are the resources sufficient, and is the funding
predictable and sustainable?
● Is there financing for the sustained operation of data systems?
● Is there an overall capacity-building program or plan? Does it identify needs and gaps? Is it clearly prioritized? Is it
costed and funded?
● Are development partners key funders? What are their funding trends? How sustainable and predictable is their funding? Are they supporting the overall system or only selected activities by certain actors? Is the government providing guidance to development partners on supporting capacity development?
● Are development partners funding technical assistance in the design and strengthening of the PRS monitoring system?
Are skills being transferred to the country as a result of this assistance?
● Are substantive capacity-building efforts in monitoring, analysis, and evaluation currently under way in the country? Are
they directly related to the PRS monitoring system? Are they at the national, sectoral, or project levels?
● How sustainable are the capacity-building efforts and the ability to retain the capacity created over the medium to long
term?
● Does the lead agency of the PRS monitoring system possess the required physical infrastructure to implement the
system? If not, is there a plan and resources to acquire this infrastructure? What is the potential for in-country universities
and other training organizations to provide training in data collection, monitoring, analysis, and evaluation to various
actors in the PRS monitoring system?
Demand for and Use of PRS Monitoring System Information
Are the goals of the PRS monitoring system clearly defined? Are the needs of the stakeholders clearly
understood? How are the outputs of the system used and incorporated within the government and beyond?
Poverty reduction strategy
● What types of data are needed for the PRS indicators?
● How would you assess the PRS in terms of its treatment of indicators? Are the indicators:
a. relevant to the subject and PRS objectives?
b. consistent with PRS policy priorities?
c. sufficient as a basis for assessing performance?
d. clearly defined?
e. accessible at a reasonable cost?
f. independently verifiable?
g. time bound?
Budget and planning
● Are agencies required to present monitoring information in support of their budget and medium-term expenditure framework submissions? Are there any incentives to encourage this? Are any of these incentives likely to distort the quality of the data?
● Does the ministry of finance or other agencies engage line ministries in dialogue on their policy choices based on
performance information?
● If yes, what information is required when submitting budget proposals?
a. retrospective and prospective information on ministry spending
b. information on ministry outputs
c. information on sector outcomes and impacts
d. results of formal evaluations and reviews
● Is a separate body responsible for national planning? If so, what types of information does it require for submissions on
sectoral inputs to national plans?
a. retrospective and prospective information on ministry spending
b. information on ministry outputs
c. information on sector outcomes and impacts
d. results of formal evaluations and reviews
Local Government and Agencies
● Is there evidence of a demand for monitoring and evaluation data among local governments and agencies? What forms of
data are being requested or would be relevant to local agencies and governments?
● Does the PRS monitoring system provide feedback and information flows to local governments and service providers?
What is the dissemination strategy?
● Is such information used at the local level (such as for an incentive system to improve the performance of service
providers)?
● Are the timing and form of the outputs provided to local governments and agencies adapted to the needs of these entities?
Line Ministries
● Do sector ministries use information as a basis for their own planning and management? Is there any specific evidence of
the use of data to inform poverty-related policy at the sectoral level?
● Do line ministries have the capacity to produce such information? Do line ministries have strategies to disseminate
monitoring information and outputs within their sectors? Are data quality and relevance an issue?
● Do line ministries rely on the PRS monitoring system? On information produced by other agencies? Are the timing and
form of outputs produced by the monitoring system appropriate to the needs of the ministries?
● Do line ministries communicate their needs to system management?
Parliament
● Does the PRS monitoring system embrace a strategy for disseminating monitoring outputs on poverty to parliament?
Does the system provide for parliament as one of the users? Are the timing and form of outputs appropriate to the needs
of parliament?
● How does parliament use the information provided by the monitoring system, the finance ministry, or sector ministries? Is it used in formal hearings of parliamentary committees? In other ways?
● Does parliament communicate its data needs informally or formally through legislation requiring particular information?
● Does parliament have the capacity to use monitoring information effectively?
Development partners
● What are the monitoring and reporting requirements of development partners?
● Are development partners using the PRS monitoring system for their own monitoring and reporting needs?
What other mechanisms are they using (other project and program monitoring systems, internal systems, and
so on)?
● Is the demand for monitoring and evaluation among development partners the main source of demand in the
country? If yes, is this because existing national capacity cannot serve development partners and domestic
clients at the same time or because there is little domestic demand?
● What is the impact of the demand by development partners on agencies that produce data and information?
● Have development partners coordinated their monitoring requirements?
Civil society
● Are strong pressures exerted by civil society—the media, nongovernmental organizations, universities,
interdisciplinary research entities, and so on—on government for information about the performance of
government in reducing poverty?
● Does the PRS monitoring system have a strategy for disseminating monitoring outputs to the general public?
Are the timing and form of the outputs appropriate to the needs of the various audiences among the public?
● Is monitoring and evaluation information published widely in the media?
● Does civil society communicate its data needs formally to the PRS monitoring system?
Annex 3
Diagnostic tool for assessment of the quality of M&E arrangements

I. Policy
1. Evaluation plan: Is there a comprehensive evaluation plan, indicating what to evaluate, why, how, and for whom?
2. M versus E: Are the difference and the relationship between M and E clearly spelled out?
3. Autonomy and impartiality: Is the need for autonomy and impartiality explicitly mentioned? Does the M&E plan allow for tough issues to be analyzed? Is there an independent budget?
4. Feedback: Is there an explicit and consistent approach to reporting, dissemination, and integration?
5. Alignment with planning and budgeting: Are M&E results integrated into planning and budgeting?

II. Indicators, Data Collection and Methodology
6. Selection of indicators: Is it clear what to monitor and evaluate? Is there a list of indicators?
7. Selection criteria: Are the criteria for the selection of indicators clear? And who selects?
8. Priority setting: Is the need acknowledged to set priorities and limit the number of indicators to be monitored?
9. Causality chain: Are the different levels of indicators (input-output-outcome-impact) explicitly linked (programme theory)?
10. Methodologies used: Is it clear how to monitor and evaluate? Are methodologies well identified and mutually integrated?
11. Data collection: Are sources of data collection clearly identified? Are indicators linked to sources of data collection?

III. Organization
12. Co-ordination and oversight: Is there an appropriate institutional structure for co-ordination, support, central oversight and feedback? With different stakeholders?
13. Statistical office: Are surveys, censuses, etc. streamlined into M&E needs? Is the role of the Statistical Office in M&E clear?
14. Line ministries: Are there M&E units in line ministries and semi-governmental institutions (parastatals), and are these properly relayed to a central unit?
15. Decentralized levels: Are there M&E units at decentralized levels, and are these properly relayed to a central unit?
16. Link with projects: Is there any effort to relay/co-ordinate with donor M&E mechanisms for projects?

IV. Capacity Building
17. Capacity diagnosis: Are current capacity strengths and weaknesses identified?
18. Capacity-building plan: Are there plans for remediation? Do these include training and appropriate salaries?

V. Participation of non-governmental actors
19. Parliament: Is the role of Parliament properly recognized, and is there alignment with parliamentary control and oversight procedures?
20. Civil society: Is the role of civil society recognized? Are there clear procedures for the participation of civil society? Is the participation institutionally arranged or rather ad hoc?
21. Donors: Is the role of donors recognized? Are there clear procedures for the participation of donors?

VI. Use
22. In annual progress reports: Is there a presentation of relevant M&E results? Are results compared with targets? Is there an analysis of discrepancies?
23. Within country: Are M&E outputs (e.g. the APR) also used for internal purposes, e.g. within national policy-making and/or policy-influencing and advocacy?

Source: Holvoet et al., 2012: 771