EVALUATING AND ASSESSING THE IMPACT OF GOVERNANCE PROGRAMMES

Mar 19, 2016

Darius Bonat
Page 1: EVALUATING AND ASSESSING THE IMPACT OF GOVERNANCE PROGRAMMES

An ongoing challenge!

Page 2

Overview
- Background
- Definitions
- General trends in evaluation and IA debates
- Governance M&E and impact assessment (IA) literature
- BINGO practice
- Conclusions and implications for CARE
- Buzz group discussions and questions

Page 3

Definitions?
- What is the difference between impact assessment and evaluation? Is it about the nature of the change? The scale of the change? The timing of the exercise? Accountability reporting versus learning?
- The difference between methodological approaches and data collection methods

Page 4

General context and debates
- Political pressure to demonstrate results and value for money (VFM)
- But confusion about whether and how impact should be defined and measured
- Old tensions and power issues are still alive:
  - between upward accountability to donors and downward accountability, learning and empowerment
  - attribution versus contribution

Page 5

Debates becoming more nuanced
- No longer simply about: quantitative versus qualitative; subjective versus objective; economists versus the rest
- More incentive to go deeper:
  - What are the purposes of the different evaluation and IA approaches?
  - What are their strengths and weaknesses?
  - What are the differences? Which elements are compatible and which are not?

Source: adapted from Chambers (2008) by Holland

Page 6

Randomised controlled trials (RCTs)
- Good for attributing impact, but:
  - not good for empowerment or downward accountability
  - broader ethical issues
  - a-theoretical: they look at isolated variables, so they are not good at explaining how and why change happens
  - not good for generalisation
  - ignore spill-over effects

Page 7

Theory of change approach
- More theoretical
- Better for learning: testing assumptions; identifying and exploring how and why change happened, in an effort to identify 'good practice'
- Some argue that, combined with RCTs, it could enable both accountability and learning (Lucas and Longhurst 2010)
- But it may not be very effective in more complex systems

Page 8

Realistic evaluation approach
- More concerned with learning than accountability
- Less confident in a priori theories of change
- More interested in identifying what change is happening or has happened, and why, in order to identify and enhance positive impacts for particular groups as a basis for future programme theory (Lucas and Longhurst 2010)

Page 9

Developmental evaluation (informed by complexity/systems thinking)

Page 10

Purpose and scope of traditional vs. complexity-oriented evaluations

Traditional: Measure success against predetermined goals
Complexity-oriented: Develop new measures and monitoring mechanisms as goals emerge and evolve

Traditional: Render definitive judgments of success or failure
Complexity-oriented: Provide feedback, generate learning, support direction or affirm changes in direction

Traditional: Aim to produce generalisable findings across time and space
Complexity-oriented: Aim to produce context-specific understandings that inform ongoing innovation

Traditional: Creates fear of failure
Complexity-oriented: Supports hunger for learning

Source: Ramalingam (2008)

Page 11

Implications of complexity thinking
- Evaluate from the perspective of multiple levels of interconnected systems; study feedback between the organisation and its environment; look for emergent rather than planned change
- Dynamics and nature of change: look for non-linearity; anticipate surprises and unexpected outcomes; analyse the system dynamics over time; look for changes in conditions that facilitate systemic change, and for how well matched the programme is to the wider system
- People, motivations and relationships: study patterns of incentives and interactions among agents; study the (e)quality of relationships; study individuals and informal/shadow coalitions versus the formal organisation; etc.

[Diagram contrasting 'what is planned' with 'what actually happens']

Source: adapted from Ramalingam (2008)

Page 12

Possible areas of difference
- Values
- Ideas about knowledge and evidence: positivist versus interpretive
- Ideas about how change happens: systems/complexity thinkers versus reductive, neo-Newtonian linear views
- etc.

Page 13

New areas for 'compatibility'? Participatory statistics have now been shown to enable:
- Standardisation and commensurability: used in Malawi and Uganda to evaluate outcomes and assess impact using rigorous representative samples
- Scale: measuring empowerment in Bangladesh
- Aggregation: several examples of successful aggregation from focus groups
- Representativeness: the above use sampling techniques deemed rigorous and at least as reliable as traditional alternatives

Source: Dee Jupp, cited in Holland (forthcoming)
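As an illustration of the aggregation point above, participatory scorecard ratings from several focus groups can be pooled into district-level and overall averages. This is a minimal sketch: the districts, group sizes and scores are invented for illustration and are not data from the Malawi, Uganda or Bangladesh work cited.

```python
from statistics import mean

# Hypothetical scorecard ratings (1-5) from focus groups in two districts.
# One inner list per focus group; all values are invented.
focus_groups = {
    "district_a": [[4, 3, 5, 4], [2, 3, 3]],
    "district_b": [[5, 4, 4], [3, 4, 4, 5]],
}

def aggregate(groups):
    """Pool every individual rating, then report per-district and overall means."""
    per_district = {d: mean(r for grp in grps for r in grp)
                    for d, grps in groups.items()}
    all_ratings = [r for grps in groups.values() for grp in grps for r in grp]
    return per_district, mean(all_ratings)

per_district, overall = aggregate(focus_groups)
print(per_district)
print(round(overall, 2))  # -> 3.79
```

In practice the aggregation and sampling design (weighting, representativeness) would follow the participatory-statistics literature cited above; this sketch shows only the mechanical pooling step.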

Page 14

Coming soon…

http://www.bigpushforward.net/

Page 15

Governance M&E and IA literature
- Mirrors debates in the general literature, e.g. attribution versus contribution
- Much discussion related to international governance indicators (a possible source of secondary data)

Page 16

Governance M&E and IA literature

Raises serious questions about assumptions and theories of change:
- Do de jure policy decisions lead to de facto outcomes?
- Does citizens' voice lead to increased government responsiveness and accountability?
- Does more transparency lead to enhanced accountability?
- Do democratic outcomes lead to development outcomes?

Page 17

More nuanced debates

Political issues:
- Is governance impact just about achieving the MDGs? Is it also about political and civil rights and freedoms?

Methodological issues:
- How should governance impacts and outcomes be defined?
- Who should be responsible for changes in complex programme systems?
- Attribution is impossible; most likely, association is as good as it gets
- Subjective/objective is a false distinction; perception-based data are very acceptable, but whose perceptions?
- Participatory numbers challenge the quantitative/qualitative distinction
- So why the lack of documented examples of the use of participatory methods?
- Need to use approaches that enhance learning and enable a better understanding of governance change processes

Implications: pluralist approaches

Page 18

Pluralist approaches
- Theory of change (TOC), e.g. ODI (Jones 2010), for advocacy
- Inductive approaches to producing quantitative data from case studies (Gaventa and Barrett 2010)
- The IETA report by McGee and Gaventa (2010) outlines the pros and cons of several approaches and methods. They are not mutually exclusive: many of the methods could be used within TOC, realistic or developmental approaches. They include:
  - quantitative surveys or use of secondary data
  - qualitative surveys
  - outcome mapping
  - most significant change
  - critical stories of change
  - participatory approaches
  - Sensemaker (quantifying data from storytelling)

Page 19

Numbers: a note of caution

“WGIs [Worldwide Governance Indicators] - the best-known and most widely cited indicators of the quality of governance - are highly attractive to elite groups yet almost useless, if not actively misleading, for lay decision-makers. For good reasons their legitimacy is likely to be highly contested” (Pollitt, 2009).

Page 20

BINGO trends: evaluation and IA as a source of tension

Increased competition

Increased pressure to show results and impact

Lack of professional norms and standards

Poor learning and accountability

Growing need for high profile fundraising and advocacy work

...causing problems for M&E

Source: Ramalingam (2008)

Page 21

BINGO trends: M&E and IA practice
- Efforts to improve M&E practice using pluralist approaches
- A move away from evaluation as reporting, towards learning and changing attitudes and behaviour
- A move away from attribution towards contribution, e.g. Christian Aid's 'leverage'
- A focus on outcomes rather than impact
- Interest in looking at change through a power lens
- Reluctance to perform meaningless aggregation
- Frustration that voice and accountability outcomes are relegated to the output level

Page 22

BINGO trends: common challenges
- Fuzzy boundaries:
  - the nature of governance
  - domains of change
  - levels of change (outputs/outcomes/impacts, etc.)
- Confusion can lead to subjective classifications
- Even elements assumed 'simple' in Northern offices are challenging in practice
- A host of problems arise because indicators are too vague or poorly specified

Page 23

Implications?
- The emerging debates are relevant; the challenges are much broader than choosing indicators
- Need to ensure the approaches chosen fit with values, understandings of 'truth' and 'evidence', and assumptions about how change happens
- Could a complexity lens help?
- Could QPM prove VFM methods within a developmental approach? For example:
  - a 'Measuring Empowerment Index' for changes in citizen empowerment
  - community scorecards for measuring changes in state effectiveness

Page 24

Implications of Ubora meta-level indicators for CO evaluation and IA?
- What are the conceptual links between definitions of outcomes, impacts, related indicators and baselines in time-bound, donor-funded projects and in longer-term programmes?
- How can approaches to evaluation in short-term projects help to build a convincing case for contribution to longer-term programme change?
- What secondary data are available to indicate governance-related changes for your impact populations? Are they produced through approaches that are compatible with your values and understandings of change?
- Do the indicators selected fit with the UNAIDS criteria for good indicators?

Page 25

Criteria for a good indicator:
- Is it needed and meaningful?
- Does it track significant change?
- Has it been tested? Will it work?
- Is it feasible: are resources available to collect and analyse the data?
- Is it consistent with, and does it add meaning to, the overall indicator sets?
- If quantitative, is it defined in metrics consistent with the overall system?
- Is it fully specified:
  - Clear purpose and rationale?
  - Qualitative as well as quantitative aspects well defined?
  - Clear methods identified for data collection?
  - Frequency of data collection specified?
  - Disaggregated as appropriate?
  - Guidelines on how to interpret included?

Source: adapted from UNAIDS (2008), in Holland and Thirkell
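The full-specification checklist above can be turned into a simple completeness check when indicator definitions are stored as structured records. This is a minimal sketch under stated assumptions: the field names and the example indicator are hypothetical labels invented for illustration, not taken from UNAIDS guidance.

```python
# Fields a fully specified indicator should carry, mirroring the checklist
# above. The field names themselves are hypothetical, chosen for this sketch.
REQUIRED_FIELDS = [
    "purpose", "definition", "collection_method",
    "frequency", "disaggregation", "interpretation_guidance",
]

def missing_fields(indicator: dict) -> list:
    """Return the checklist items this indicator definition leaves unspecified."""
    return [f for f in REQUIRED_FIELDS if not indicator.get(f)]

# A hypothetical community-scorecard indicator, deliberately incomplete.
indicator = {
    "purpose": "Track perceived responsiveness of district officials",
    "definition": "Mean scorecard rating (1-5) given by community groups",
    "collection_method": "Community scorecard sessions",
    "frequency": "",  # not yet specified
    "disaggregation": "By gender and district",
}

print(missing_fields(indicator))  # -> ['frequency', 'interpretation_guidance']
```

A check like this only tests that each checklist item has been filled in; judging whether an indicator is meaningful, tested and feasible remains a substantive review task.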