Impact Evaluation in Education
Introduction to Monitoring and Evaluation
Andrew Jenkins
23/03/14
I am cutting rocks
The essence of theory of change – linking activities to intended outcomes
I am building a temple
Theory of change
“the process through which it is expected that inputs will be converted to expected outputs, outcome and impact”
DfID Further Business Case Guidance “Theory of Change”
Theory of change
Start with a RESULTS CHAIN
The results chain: tips
Activities              | Outputs                     | Outcomes
We produce              | We influence                | We contribute to
We control              | We control                  | Clients control
We are accountable for  | We expect                   | Should occur
100% attribution        | Some attribution            | Partial attribution
Readily changed         | Less flexibility to change  | Long term
Delivered annually      | By end of program           | Long term
Monitoring – activities and outputs
Personal Monitoring Tools
No monitoring - blind and deaf
Monitoring and Evaluation
Monitoring – Efficiency
Measures how productively inputs (money, time, personnel, equipment) are being used in the creation of outputs (products, results). An efficient organisation is one that achieves its objectives with the least expenditure of resources.

Evaluation – Effectiveness
Measures the degree to which results/objectives have been achieved. An effective organisation is one that achieves its results and objectives.
Inputs → Outputs → Outcomes
(control: all → most → some)

MONITORING – focused on project process (per individual project)
• Inputs: resources – staff, funds, facilities, supplies, training
• Outputs: project deliverables achieved; "count" (quantify) what has been done

EVALUATION – focused on effectiveness of the project process (across many projects)
• Outcomes: short and intermediate effects; long-term effects and changes
Resist temptation, there must be a better way!
• Clear objectives
• Few key indicators
• Quick, simple methods
• Existing data sources
• Participatory methods
• Short feedback loops
• Action results!
Monitoring/Evaluation objectives must be SMART
• Specific
• Measurable
• Achievable
• Realistic
• Timed
(see 10 Easy Mistakes, page 5)
Evaluation: who evaluates whom?
The value of a joint approach
The Logical Chain
1. Define Objectives (and Methodology)
2. Supply Inputs
3. Achieve Outputs
4. Generate Outcomes
5. Identify and Measure Indicators
6. Evaluate by comparing Objectives with Indicators
7. Redefine Objectives (and Methodology)
www.brac.net
Impact Evaluation
• An assessment of the causal effect of a project, program or policy on beneficiaries. Uses a counterfactual…

Impacts = Outcomes − What would have happened anyway
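The impact formula above can be made concrete with made-up numbers (all values here are hypothetical, purely for illustration):

```python
# Hypothetical numbers: average test scores after a tutoring program.
observed_outcome = 68.0   # average score of participants after the program
counterfactual = 61.5     # estimated score had the program never existed

# Impact = Outcomes - what would have happened anyway
impact = observed_outcome - counterfactual
print(impact)  # 6.5
```

The whole difficulty of impact evaluation lies in estimating the counterfactual, since it is never directly observed; the methods that follow are different ways of approximating it.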
When to use Impact Evaluation?
Evaluate impact when the project is:
• Innovative
• Replicable/scalable
• Strategically relevant for reducing poverty
• The evaluation will fill a knowledge gap
• There is substantial policy impact
Use evaluation within a program to test alternatives and improve the program.
Impact Evaluation Answers
• What was the effect of the program on outcomes?
• How much better off are beneficiaries because of the program?
• How would outcomes change if the program design changed?
• Is the program cost-effective?
Different methods to measure impact

Randomised assignment – experimental

Non-experimental:
• Matching
• Difference-in-differences
• Regression discontinuity design
• Instrumental variable / random promotion
Randomization
• The “gold standard” in evaluating the effects of interventions
• It allows us to form "treatment" and "control" groups – identical characteristics, differing only by the intervention
Counterfactual: randomized-out group
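A minimal simulation sketch of this idea, with entirely hypothetical numbers (200 students, a true program effect of about 5 score points): because assignment is random, the randomized-out group has the same expected baseline as the treated group, so a simple difference in means recovers the effect.

```python
import random
import statistics

random.seed(0)

# Hypothetical: 200 eligible students; the program raises scores by ~5 points.
students = list(range(200))
random.shuffle(students)
treatment = set(students[:100])        # randomly assigned to the program

def score(student_id):
    base = 60 + random.gauss(0, 10)    # baseline ability, same distribution for all
    return base + (5 if student_id in treatment else 0)

scores = {s: score(s) for s in students}
treated_mean = statistics.mean(scores[s] for s in students if s in treatment)
control_mean = statistics.mean(scores[s] for s in students if s not in treatment)

# The randomized-out group serves as the counterfactual:
print(round(treated_mean - control_mean, 1))  # estimate should be near the true effect of 5
```

With larger samples the estimate tightens around the true effect; in small samples random noise remains, which is why randomized evaluations report standard errors.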
Matching
• Matching uses large data sets and heavy statistical techniques to construct the best possible artificial comparison group for a given treatment group.
• Units are selected on the basis of similarities in observed characteristics.
• Assumes no selection bias based on unobservable characteristics.
Counterfactual: matched comparison group
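A toy one-covariate nearest-neighbour sketch (the data and the single matching variable, baseline income, are hypothetical; real matching uses many covariates or propensity scores):

```python
# Each record: (baseline_income, outcome). Each treated unit is matched to the
# comparison unit with the closest baseline income.
treated = [(10, 14.0), (20, 22.0), (35, 40.0)]
comparison = [(9, 11.0), (19, 18.0), (34, 36.0), (50, 55.0)]

def nearest(x, pool):
    # Comparison record whose baseline income is closest to x
    return min(pool, key=lambda rec: abs(rec[0] - x))

effects = []
for income, outcome in treated:
    _, matched_outcome = nearest(income, comparison)
    effects.append(outcome - matched_outcome)

att = sum(effects) / len(effects)  # average treatment effect on the treated
print(round(att, 2))  # 3.67
```

The matched comparison group is the counterfactual here; the method's key weakness, as the slide notes, is that it cannot control for unobserved differences.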
Difference-in-differences
• Compares the change in outcomes over time between the treatment group and the comparison group
• Controls for factors that are constant over time in both groups
• Assumes 'parallel trends' in the two groups in the absence of the program
Counter-factual: changes over time for the non- participants
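The diff-in-diff calculation itself is simple arithmetic; a sketch with hypothetical mean test scores:

```python
# Hypothetical mean outcomes before and after the program.
treat_pre, treat_post = 55.0, 63.0   # treatment group
comp_pre,  comp_post  = 54.0, 58.0   # comparison group

# Change in the treatment group minus change in the comparison group:
# the comparison group's change nets out the common time trend.
did = (treat_post - treat_pre) - (comp_post - comp_pre)
print(did)  # 4.0
```

The estimate is only credible if the parallel-trends assumption holds, i.e. the treatment group would have changed by the same 4 points as the comparison group absent the program.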
Uses of Different Designs

Design                   | When to use
Randomization            | ► Whenever possible ► When an intervention will not be universally implemented
Random promotion         | ► When an intervention is universally implemented
Regression discontinuity | ► If an intervention is assigned based on rank
Diff-in-diff             | ► If two groups are growing at similar rates
Matching                 | ► When other methods are not possible ► Matching at baseline can be very useful
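Regression discontinuity, listed above for rank-based assignment, can be sketched with hypothetical data: a scholarship goes to applicants scoring at or above a cutoff, and we compare mean outcomes just either side of it, where applicants are plausibly comparable.

```python
# Hypothetical: scholarship assigned at entry-score cutoff of 70.
applicants = [
    # (entry_score, later_outcome)
    (66, 60.0), (68, 61.0), (69, 62.0),   # just below the cutoff (no scholarship)
    (70, 67.0), (71, 68.0), (73, 69.0),   # just above the cutoff (scholarship)
]
cutoff, bandwidth = 70, 5

# Keep only applicants within a narrow bandwidth of the cutoff.
near = [(s, y) for s, y in applicants if abs(s - cutoff) <= bandwidth]
above = [y for s, y in near if s >= cutoff]
below = [y for s, y in near if s < cutoff]

effect = sum(above) / len(above) - sum(below) / len(below)
print(effect)  # 7.0
```

Real applications fit regression lines on each side of the cutoff rather than raw means, and the estimate is local: it applies to applicants near the threshold, not to the whole population.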
Qualitative and Quantitative Methods
Qualitative methods focus on how results were achieved (or not).
They can be very helpful for process evaluation.
It is often very useful to conduct a quick qualitative study before planning an experimental (RCT) study.
Thank you!