Mar 19, 2016
EVALUATING AND ASSESSING THE IMPACT OF GOVERNANCE PROGRAMMES
An ongoing challenge!
Overview
- Background
- Definitions
- General trends in evaluation and IA debates
- Governance M&E and Impact Assessment (IA) literature
- BINGO practice
- Conclusions and implications for CARE
- Buzz group discussions and questions
What is the difference between impact assessment and evaluation?
- Is it about the nature of the change?
- The scale of change?
- The timing of the exercise?
- Accountability reporting versus learning?
- The difference between methodological approaches and data collection methods?
General context and debates
- Political pressure to demonstrate results and value for money (VFM)
- But confusion about whether and how impact should be defined and measured
- Old tensions and power issues remain alive:
  - upward accountability to donors versus downward accountability, learning and empowerment
  - attribution versus contribution
Debates are becoming more nuanced
No longer simply about:
- quantitative versus qualitative
- subjective versus objective
- economists versus the rest
More incentive to go deeper:
- What are the purposes of different evaluation and IA approaches?
- What are their strengths and weaknesses? What are the differences?
- Which elements are compatible and which are not?
Source: adapted from Chambers 2008 by Holland
Randomised controlled trials (RCTs)
- Good for attributing impact, but:
- Not good for empowerment or downward accountability
- Broader ethical issues
- A-theoretical: they look at isolated variables, so are not good at explaining how and why change happens
- Not good for generalisation
- Ignore spill-over effects
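The attribution logic behind RCTs can be sketched in a few lines: with random assignment, the difference in mean outcomes between treated and control groups estimates the programme's average impact. All data below are hypothetical and purely illustrative.

```python
import random
import statistics

def difference_in_means(treated, control):
    """Estimated average treatment effect: mean(treated) - mean(control).

    Under random assignment, this difference is an unbiased estimate of
    the programme's average impact on the measured outcome.
    """
    return statistics.mean(treated) - statistics.mean(control)

# Hypothetical outcome scores (e.g. a 0-100 service-satisfaction index).
random.seed(1)
control = [random.gauss(50, 10) for _ in range(200)]
treated = [random.gauss(55, 10) for _ in range(200)]  # simulated true effect: +5

ate = difference_in_means(treated, control)
print(round(ate, 1))  # should land close to the simulated effect of 5
```

Note that this is exactly what the slide criticises: the estimate says nothing about how or why the change happened, only how big it was on average.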
Theory of change (TOC) approach
- More theoretical
- Better for learning: testing assumptions and identifying/exploring how and why change happened, in an effort to identify good practice
- Some argue that, if combined with RCTs, it could enable both accountability and learning (Lucas and Longhurst 2010)
- But may not be very effective in more complex systems
Realistic evaluation approach
- More concerned with learning than accountability
- Less confident in a priori theories of change
- More interested in identifying what change is happening or has happened, and why, in order to identify and enhance positive impacts for particular groups as a basis for future programme theory (Lucas and Longhurst 2010)
Developmental evaluation (informed by complexity/systems thinking)
Purpose and scope of traditional vs. complexity-oriented evaluations
Source: Ramalingam 2008
Traditional                                         | Complexity-oriented
Measure success against predetermined goals         | Develop new measures and monitoring mechanisms as goals emerge and evolve
Render definitive judgments of success or failure   | Provide feedback, generate learning, support direction or affirm changes in direction
Aim to produce generalisable findings across time and space | Aim to produce context-specific understandings that inform ongoing innovation
Creates fear of failure                             | Supports hunger for learning
Implications of complexity thinking
- Systems: evaluate from the perspective of multiple levels of interconnected systems, study feedback between the organisation and its environment, and look for emergent rather than planned change
- Dynamics and nature of change: look for non-linearity, anticipate surprises and unexpected outcomes, analyse the system dynamics over time, look for changes in conditions that facilitate systemic change, and assess how well matched the programme is to the wider system
- People, motivations and relationships: study patterns of incentives and interactions among agents, the (e)quality of relationships, and individuals and informal/shadow coalitions versus the formal organisation, etc.
(Diagram: what is planned vs. what actually happens.)
Source: adapted from Ramalingam 2008
Possible areas of difference
- Values
- Ideas about knowledge and evidence: positivist versus interpretive
- Ideas about how change happens: systems/complexity thinkers versus reductive/neo-Newtonian linear views, etc.
New areas for compatibility?
Participatory statistics have now been shown to enable:
- Standardisation and commensurability: in Malawi and Uganda, to evaluate outcomes and assess impact using rigorous representative samples
- Scale: measuring empowerment in Bangladesh
- Aggregation: several examples of successful aggregation from focus groups
- Representativeness: the above use sampling techniques deemed rigorous and at least as reliable as traditional alternatives
Source: Dee Jupp, cited in Holland (forthcoming)
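As a toy illustration of the aggregation point above (all villages, group sizes and scores are hypothetical), scorecard ratings from focus groups of different sizes can be combined into one comparable figure using a participant-weighted mean:

```python
# Hypothetical community scorecard data: each focus group rates a service
# on a 1-5 scale; group sizes differ, so each group's score is weighted
# by its number of participants.
focus_groups = [
    {"village": "A", "participants": 12, "score": 3.5},
    {"village": "B", "participants": 8,  "score": 2.0},
    {"village": "C", "participants": 20, "score": 4.0},
]

def weighted_mean(groups):
    """Aggregate group scores into a single figure, weighting each group
    by its number of participants."""
    total = sum(g["participants"] for g in groups)
    return sum(g["score"] * g["participants"] for g in groups) / total

print(round(weighted_mean(focus_groups), 2))  # -> 3.45
```

The representativeness claim in the slide rests on how the groups were sampled, not on the arithmetic; the weighting only ensures larger groups count proportionally in the aggregate.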
Governance M&E and IA literature
- Mirrors debates in the general literature, e.g. attribution versus contribution
- Much discussion relates to international governance indicators (a possible source of secondary data)
Governance M&E and IA literature
Raises serious questions about assumptions and theories of change:
- Do de jure policy decisions lead to de facto outcomes?
- Does citizen voice lead to increased government responsiveness and accountability?
- Does more transparency lead to enhanced accountability?
- Do democratic outcomes lead to development outcomes?
More nuanced debates
Political issues:
- Is governance impact just about achieving the MDGs? Is it also about political and civil rights and freedoms?
Methodological issues:
- How should governance impact/outcomes be defined? Who should be responsible for changes in complex programme systems?
- Attribution is impossible; most likely, association is as good as it gets
- The subjective/objective distinction is false; perception-based data are very acceptable, but whose perceptions?
- Participatory numbers challenge the quantitative/qualitative distinction, so why the lack of documented examples of the use of participatory methods?
- Need to use approaches that enhance learning and enable better understanding of governance change processes
Implication: pluralist approaches
Pluralist approaches
- TOC, e.g. ODI (Jones 2010) for advocacy
- Inductive approaches to producing quantitative data from case studies (Gaventa and Barrett 2010)
- The IETA report by McGee and Gaventa 2010 outlines the pros and cons of several approaches and methods. They are not mutually exclusive; many of the methods could be used within TOC, realistic or developmental approaches. They include:
  - quantitative surveys or use of secondary data
  - qualitative surveys
  - outcome mapping
  - most significant change
  - critical stories of change
  - participatory approaches
  - SenseMaker (quantifying data from storytelling)
Numbers: a note of caution
"WGIs [Worldwide Governance Indicators] - the best-known and most widely cited indicators of the quality of governance - are highly attractive to elite groups yet almost useless, if not actively misleading, for lay decision-makers. For good reasons their legitimacy is likely to be highly contested" (Pollitt, 2009).
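A small hypothetical example illustrates one reason for Pollitt's caution: the ranking produced by a composite governance index can hinge entirely on essentially arbitrary weighting choices. The countries, sub-indicators and scores below are invented for illustration.

```python
# Two hypothetical countries scored on two governance sub-indicators.
# Swapping the weights between the sub-indicators reverses the ranking,
# one reason composite indices can mislead lay decision-makers.
scores = {
    "Country X": {"voice": 0.8, "effectiveness": 0.2},
    "Country Y": {"voice": 0.3, "effectiveness": 0.7},
}

def composite(country, weights):
    """Weighted sum of a country's sub-indicator scores."""
    return sum(weights[k] * scores[country][k] for k in weights)

for weights in ({"voice": 0.7, "effectiveness": 0.3},
                {"voice": 0.3, "effectiveness": 0.7}):
    ranking = sorted(scores, key=lambda c: composite(c, weights), reverse=True)
    print(weights, "->", ranking)  # the two weightings rank the countries in opposite orders
```

Neither weighting is objectively "right", which is exactly why the legitimacy of such league tables is contested.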
BINGO trends: E and IA as a source of tension
- Increased competition
- Increased pressure to show results and impact
- Lack of professional norms and standards
- Poor learning and accountability
- Growing need for high-profile fundraising and advocacy work
...all causing problems for M&E
Source: Ramalingam 2008
BINGO trends: M&E and IA practice
Efforts to improve M&E practice using pluralist approaches:
- A move away from evaluation as reporting towards learning and changing attitudes and behaviour
- A move away from attribution to contribution (Christian Aid's "leverage")
- A focus on outcomes rather than impact
- Interest in looking at change through a power lens
- Reluctance to perform meaningless aggregation
- Frustration that voice and accountability outcomes are relegated to the output level
BINGO trends: common challenges
- Fuzzy boundaries:
  - the nature of governance
  - domains of change
  - levels of change (outputs/outcomes/impacts, etc.)
- Confusion can lead to subjective classifications
- Even elements assumed simple in Northern offices are challenging in practice
- A host of problems because indicators are too vague or poorly specified
Implications?
- Emerging debates are relevant; the challenges are much broader than choosing indicators
- Need to ensure that the approaches chosen fit with values, understandings of truth and evidence, and assumptions about how change happens
- Could a complexity lens help?
- Could QPM prove to be VFM methods within a developmental approach? e.g.:
  - a Measuring Empowerment Index for changes in citizen empowerment
  - community scorecards for measuring changes in state effectiveness
Implications of Ubora meta-level indicators for CO evaluation & IA?
- What are the conceptual links between definitions of outcomes, impacts, related indicators and baselines in time-bound, donor-funded projects and longer-term programmes?
- How can approaches to evaluation in short-term projects help to build a convincing case for contribution to longer-term programme change?
- What secondary data are available to indicate governance-related changes for your impact populations? Are they produced through approaches that are compatible with your values and understandings of change?
- Do the indicators selected fit with the UNAIDS criteria for good indicators?
Criteria for a good indicator
- Is it needed and meaningful?
- Does it track significant change?
- Has it been tested / will it work?
- Is it feasible: are resources available to collect and analyse the data?
- Is it consistent with, and does it add meaning to, the overall indicator set?
- If quantitative, is it defined in metrics consistent with the overall system?
- Is it fully specified:
  - a clear purpose and rationale?
  - qualitative as well as quantitative aspects well defined?
  - identified with clear methods to collect data?
  - specifies how frequently data should be collected?
  - disaggregated as appropriate?
  - includes guidelines on how to interpret?
Source: adapted from UNAIDS 2008 in Holland and Thirkell