Jonathan A. Morell, Ph.D.
Director of Evaluation – Fulcrum
[email protected]
http://evaluationuncertainty.com
(734) 646-8622

Presented to the United Nations Development Programme, February 20, 2014
Complex system behavior drives unexpected outcomes:
· Network effects
· Power law distributions
· Ignoring bifurcation points
· State changes and phase shifts
· Uncertain and evolving environments
· Feedback loops with different latencies
· Self-organization and emergent behavior
· Ignoring the full range of stable and unstable conditions in a system
· Etc.
A guaranteed evaluation solution:
· Post-test only
· Treatment group only
· Unstructured data collection
But we lose many evaluation tools:
· Time series data
· Comparison groups
· Specially developed surveys and interview protocols
· Qualitative and quantitative data collection at specific times in a project's life cycle
· Etc.
Why the loss? Because establishing evaluation mechanisms requires:
· Time
· Effort
· Money
· Negotiations with program participants, stakeholders, and other parties
Some Examples of the Kinds of Problems We May Run Into
Program outcome: Free and reduced fees for post-natal services
Evaluation is looking for: Survey/interview — health indicators for mother and child; child development indicators
Possible unexpected outcomes: Drug and supply hoarding; new sets of informal fees; lower than expected use of service
Evaluation design weakness: No interview or observation to estimate amount of fees; no way to correlate fees with attendance or client characteristics

Program outcome: Improve agricultural yield
Evaluation is looking for: Records, interviews, observations — yield, new system cost, profit
Possible unexpected outcomes: Perverse effects of increased wealth disparities
Evaluation design weakness: No other communities to check on other reasons for disparity; no interviews to check on consequences of disparities
· Funding
· Deadlines
· Logic models
· Measurement
· Program theory
· Research design
· Information use plans
· Defining role of evaluator
· Logistics of implementation
· Planning to anticipate and respond to surprise
These methods are most useful for detecting leading indicators:
· Forecasting & program monitoring
· System-based logic modeling
Foreseeable:
· Get lucky
· Knowledge from stakeholders
· Good program theory
· Use research literature
· Use experts

Unforeseeable:
· Complex system behavior makes prediction impossible no matter how clever we are.

PS – do not assume that complex systems are always unpredictable!
The trick is to do a little better than the Delphic oracle.
Example: Agricultural Yield
Outcome evaluated for: Records, interviews, observations — yield, new system cost, profit
Possible unexpected outcomes: Perverse effects of increased wealth disparities
Evaluation design weakness: No other communities to check on other reasons for disparity; no interviews to check on consequences of disparities
Evaluation Methodology: Expand Monitoring Outside Borders of Agriculture Program
Evaluation redesign:
· Adopt a "whole community" perspective
· Identify a wide range of social indicators
· Identify a diverse set of key informants
· Conduct regular open-ended interviewing
Retooling program theory
Agile methodology
Data choices
Example: Free/Reduced Fees for Post-Natal Services
Outcome evaluated for: Survey/interview — health indicators for mother and child; child development indicators
Possible unexpected outcomes: Drug and supply hoarding; new sets of informal fees; lower than expected use of service
Evaluation design weakness: No interview or observation to estimate amount of fees; no way to correlate fees with attendance or client characteristics
Add a process component to the evaluation design:
· Survey of mothers to assess total cost of service
· Open-ended interviews with clinic staff about consequences of the new system for their work lives
Nice to say, but agile evaluation can be expensive.
· Do we want both?
· Do we want only one of these tactics?
These are the kinds of questions that have to be added to all the other decisions we make when designing an evaluation.