Program Evaluation

Program evaluation is a vast field that encompasses many different research traditions, including case studies, process tracing, stakeholder analysis, causal modeling, and cost-benefit analysis. The emphasis of this course is on the quantitative modeling of program impact, so the theme of unbiased results will be prominent throughout the course. Students will learn how to think through causal modeling using correlation analysis (regression) and how to apply appropriate techniques to limit bias (panel methods, instrumental variables, propensity score matching, regression discontinuity designs, and natural experiments). The course will also present a variety of research designs that allow program impacts to be isolated from other factors that contribute to variation in program outcomes. The objective of the course is to give students the tools they need to be responsible producers and consumers of program evaluations.


Regression Notes for R

  • Quick Start [ pdf ]

  • Additional Notes [ pdf ]

  • Fixed Effects Syntax [ html ]

  • Instrumental Variables Example [ html ]

Lectures

  • Lecture 00: Course Overview [ ppt ]

  • Lecture 01: Regression Review [ ppt ]

  • Lecture 02: Interpreting Program Impact [ ppt ]

  • Lecture 03: Partitioning the Variance of Y [ ppt ]

  • Lecture 04: Ballentine Venn Diagrams [ ppt ]

  • Lecture 05: Partitioned Regression [ ppt ]

  • Lecture 06: Bias in Regression [ ppt ]

  • Lecture 07: Instrumental Variables [ ppt ]

  • Lecture 08: Panel Data Methods (Fixed Effects) [ ppt ]

  • Lecture 09: Seven Sins of Regression [ ppt ]

  • Lecture 10: Selection and Matching [ ppt ]

  • Lecture 11: Research Design for Evaluation [ ppt ]

  • Lecture 12: Time Series, Regression Discontinuity, and Survival Analysis [ ppt ]

Simulation of Regression Standard Errors and Bias [ R file ] [ cases script ] [ zipped ]

Problem Sets

  • Homework #1: Regression Review [ problems ]

  • Homework #2: Confidence Intervals [ problems ]

  • Homework #3: Partitioned Regression [ problems ] [ data ]

  • Homework #4: Omitted Variable Bias [ problems ] [ data ]

  • Homework #5: Fixed Effects and Instrumental Variables [ problems ] [ data ]

Campbell Scores (an exercise to measure internal validity) [ overview ] [ ppt ]

  • Exercises based on case studies from: Bingham, R. D., & Felbinger, C. L. (2002). Evaluation in practice: A methodological approach (2nd ed.). Chatham House Publishers / Seven Bridges Press. [ Amazon link ]

  • CH 5 Solutions [ doc ]

  • CH 7 Solutions [ doc ]

  • CH 8 Solutions [ doc ]

  • CH 9 Solutions [ doc ]

  • CH 10 Solutions [ doc ]

  • CH 11 Solutions [ doc ]

  • CH 12 Solutions [ doc ]

  • CH 20 Solutions [ doc ]

  • CH 21 Solutions [ doc ]

Exams

Midterm Review [ problems ]
Midterm Exam [ 2013 ] [ 2014 ]

Final Exam Study List [ doc ]
Final Exam [ 2013 ] [ 2014 ]

Regression Simulations [ github ]


Demonstration of 95% confidence intervals. Approximately 95 out of 100 samples will result in confidence intervals that contain the true slope. Note that if a confidence interval contains zero, the slope is not statistically significant at the 0.05 level.
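A minimal sketch of this simulation in base R (the true slope, noise level, and sample size here are illustrative assumptions, not values taken from the course's R file):

# Simulate 100 samples and count how many 95% CIs cover the true slope.
set.seed( 123 )
b1 <- 2            # true slope (assumed for illustration)
n  <- 10           # sample size per draw
reps <- 100
covered <- logical( reps )
for( i in 1:reps )
{
  x <- rnorm( n )
  y <- 1 + b1*x + rnorm( n, sd=3 )
  ci <- confint( lm( y ~ x ) )[ "x", ]    # default 95% CI for the slope
  covered[ i ] <- ci[1] <= b1 & b1 <= ci[2]
}
sum( covered )     # approximately 95 of the 100 intervals contain b1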

If we want to increase statistical power so that we have more confidence in our results (a better chance of achieving statistical significance), we can increase the sample size from 10 to 25. Notice how much narrower the confidence intervals become.
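A quick way to see this, using the same assumed setup as above:

# Average CI width shrinks as the sample size grows from 10 to 25.
set.seed( 123 )
ci_width <- function( n, b1=2, sd=3 )
{
  x <- rnorm( n )
  y <- 1 + b1*x + rnorm( n, sd=sd )
  ci <- confint( lm( y ~ x ) )[ "x", ]
  ci[2] - ci[1]
}
mean( replicate( 1000, ci_width( n=10 ) ) )   # wider intervals
mean( replicate( 1000, ci_width( n=25 ) ) )   # noticeably narrower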

If we have a small sample size but a large effect size, we still have good statistical power. There is a trade-off between the size of the program effect and how much data we need to detect it.
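A rough power calculation under the same assumed setup illustrates the trade-off: holding n at 10, a large slope is detected far more reliably than a small one.

# Share of samples in which the slope is significant at the 0.05 level.
set.seed( 123 )
sig_slope <- function( n, b1, sd=3 )
{
  x <- rnorm( n )
  y <- 1 + b1*x + rnorm( n, sd=sd )
  summary( lm( y ~ x ) )$coefficients[ "x", "Pr(>|t|)" ] < 0.05
}
mean( replicate( 1000, sig_slope( n=10, b1=0.5 ) ) )   # small effect: low power
mean( replicate( 1000, sig_slope( n=10, b1=5 ) ) )     # large effect: high power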

Measurement Error: imprecise measurement affects the model differently depending on whether the error is in the dependent or an independent variable. Classical measurement error in the dependent variable inflates standard errors but leaves the slope estimate unbiased; measurement error in an independent variable attenuates the estimated slope toward zero.
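A short base R illustration of both cases (all parameter values are assumptions chosen for the demonstration):

# Classical measurement error: noise added to Y vs. noise added to X.
set.seed( 123 )
n  <- 1000
b1 <- 2                          # true slope (assumed)
x  <- rnorm( n )
e  <- rnorm( n, sd=2 )           # measurement error, var(e) = 4
y  <- 1 + b1*x + rnorm( n )

coef( lm( y ~ x ) )[2]           # baseline: approximately 2
coef( lm( I(y+e) ~ x ) )[2]      # error in Y: still approximately 2 (larger SEs)
coef( lm( y ~ I(x+e) ) )[2]      # error in X: attenuated toward zero,
                                 # approx. b1 * var(x)/(var(x)+var(e)) = 0.4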