UC Berkeley DeCal Program
GLOBAL POVERTY & IMPACT EVALUATION:  

LEARNING WHAT WORKS FOR THE WORLD’S POOR

Fall 2009
Tuesdays, 5:00 PM
Location: 122 Wheeler
 
Student Facilitators:
Garret Christensen  (garret AT econ DOT berkeley DOT edu)
Erick Gong (egong AT are DOT berkeley DOT edu)
 
Instructor of Record:
Ted Miguel   (emiguel AT econ DOT berkeley DOT edu)

 
Sponsored by the Center of Evaluation for Global Action (CEGA)
CEGA is a multi-disciplinary research center at the University of California, Berkeley advancing global health and development through impact evaluation and economic analysis. The Center is premised on the principle that knowledge gained from randomized trials and other forms of impact evaluation is a valuable public good that can improve policy and outcomes around the world.
 

Course Content: The course will cover impact evaluation theory (causal inference, experimental design and basic statistics) as well as methods (randomization, difference-in-difference, regression discontinuity, and propensity score matching). The curriculum will be applied, with weekly case studies of field research drawn from the international development literature. Discussions of methods will include issues related to research ethics and the protection of human subjects. At the end of the course, students will have the opportunity to present their own impact evaluation research projects and get feedback from CEGA faculty members.

Audience:
The course is ideally suited for graduate and advanced undergraduate students with an interest in impact evaluation. Graduate students in Public Policy, Public Health, Education, Political Science, ERG, and Sociology, and undergraduates who have taken statistics courses, may benefit the most from this course. The curriculum is very applied and will be useful for students engaged in international development field projects, social entrepreneurship, and policy analysis. Please email one of the student facilitators if you have questions about whether this course is the right fit given your interests and background.

Learning Outcomes:
Students who complete this course will be prepared to: 1) distinguish research-based “best practices” from those that have not been rigorously evaluated; 2) design an impact evaluation of a policy or intervention; and 3) analyze data using a statistical software package.
For students who are considering conducting an impact evaluation of a program, facilitators will provide references to technical resources (e.g. textbooks on sample design and software for power calculations) and guidelines for developing a rigorous study.

Methods of Instruction:
During class, facilitators will present the main concepts in short lectures structured around case studies (suggested readings from the literature), which will also serve as the basis for class discussion and small group activities. Lectures will discuss the strongest (most rigorous) evaluation methods and the shortcomings of weak evaluation methods. Case studies will highlight research from Africa, Asia, and South America as well as the U.S. and will cover programs related to health, governance, education, and agriculture. Group work will provide hands-on experience with research design and data analysis.

Grading: 
Students will be graded on the following: 1) attendance, 2) participation in discussion, 3) four short problem sets (approximately one hour of work each), and 4) a group presentation. Students who miss two days of lecture (not including the first week’s introduction) will be in danger of failing the course. For every lecture that a student misses, the student will need to submit a one-page summary of, or reaction to, the lecture slides or referenced papers (posted below). Depending on time availability, class size, and students’ interests, group presentations will take place in the final two weeks of class.

Assignments: 
The problem sets are designed to teach students how to apply the four methods (randomization, difference-in-difference, regression discontinuity, and propensity score matching) to actual data using statistical software (STATA). An example of STATA code will be provided for each problem set. Listed below are the four problem sets, with the emphasis of each in parentheses, and the due dates.

Problem Set 1 (Randomized Evaluations): Handed out Sept 22, due Oct 3.
Problem Set 2 (Regression Discontinuity): Handed out Oct 13, due Oct 27.
Problem Set 3 (Matching / Propensity Score): Handed out Oct 27, due Nov 10.
Problem Set 4 (Difference-in-Difference): Handed out Nov 10, due Nov 24.

The group presentation will involve either a written or an oral presentation of a brief research proposal; see the Research Presentation Details posted under the October 6 session below.

Software
Problem sets will require STATA, a statistical software program widely used in impact evaluation. We recommend that students install STATA on their own computers in order to complete the problem sets. If you need to purchase a copy, a single-user one-year license for Small Stata (sufficient for this course) is available through Berkeley’s GradPlan for $48. Note that a license allows you to install the software on up to three of your own computers. See www.stata.com/order/schoollist.html to purchase (select CA, then UCB, then product code SMSOFTAGS).

Students can also access STATA in the computer labs at 1535 Tolman Hall during drop-in hours. If you need access, we will issue you a login and password. Drop-in hours for the Tolman computer labs can be found at http://facility.berkeley.edu/labs/hourstmf.html.


Schedule:
* Asterisks denote optional readings that may not be discussed directly in class but are likely relevant and helpful.

September 1:
Course Introduction

         
Slides

September 8: Introduction to impact evaluation in international development
 
Banerjee, Abhijit, et al. Making Aid Work. The MIT Press, 2007.
 
Duflo, Esther. Scaling Up and Evaluation. Annual World Bank Conference on Development Economics, 2004.

Easterly, William. Can the West Save Africa? Journal of Economic Literature, 2009.

Lecture Slides
  
September 15: Randomized Evaluations I: Introduction, methodology, and the basic econometrics
(Case Study: conditional cash transfers in Mexico)
 
Duflo, Esther, Rachel Glennerster, and Michael Kremer. Using Randomization in Development Economics Research: A Toolkit. Poverty Action Lab White Paper, MIT.

Schultz, T. Paul. School Subsidies for the Poor: Evaluating the Mexican Progresa Poverty Program. Journal of Development Economics. June 2004, 199-250.

*Fisman, Raymond and Edward Miguel. “Chapter 8.” In Economic Gangsters. Princeton, New Jersey: Princeton University Press, 2008.
 
Lecture Slides
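The basic econometrics of this session can be previewed in a few lines. The course's problem sets use STATA, but the core idea is language-agnostic: because treatment is randomly assigned, a simple difference in means is an unbiased estimate of the average treatment effect. The numbers below are made up purely for illustration (Python sketch):

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical outcomes for a randomized evaluation. Random assignment
# means the treatment and control groups are comparable on average, so
# the difference in mean outcomes estimates the average treatment effect.
treatment = [63.0, 71.0, 68.0, 74.0, 70.0]
control = [61.0, 64.0, 66.0, 60.0, 64.0]

ate = mean(treatment) - mean(control)

# Conventional standard error for a difference in means
# (unequal variances, as in a two-sample t-test).
se = sqrt(variance(treatment) / len(treatment)
          + variance(control) / len(control))

print(round(ate, 2), round(se, 2))  # prints: 6.2 2.13
```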

September 22: Randomized Evaluations II: Applications
(Case Studies: housing vouchers in the US, microfinance in South Africa, and agriculture in Kenya)
 
Kling, Jeffrey, Jeffrey Liebman, and Lawrence Katz. Experimental Analysis of Neighborhood Effects. Econometrica, January 2007, 83-119.

*Karlan, Dean and Jonathan Zinman. Credit Elasticities in Less Developed Countries: Implications for Microfinance. American Economic Review, forthcoming.

Duflo, Esther, Michael Kremer, and Jonathan Robinson. How High Are Rates of Return to Fertilizer? Evidence from Field Experiments in Kenya. American Economic Review, May 2008, 482-488.

Lecture Slides

Problem Set 1 Small Data


Problem Set 1 Assignment & Sample Do File

Problem Set 1 Answers
 
September 29: Randomized Evaluations III: Complications, Externalities
(Case Study: deworming drugs in Kenya)
 
Kremer, Michael and Edward Miguel. Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities. Econometrica. January 2004, 159-217.

*Kremer, Michael and Edward Miguel. The Illusion of Sustainability. Quarterly Journal of Economics. August 2007, 1007-1065.

Lecture Slides
 
   
October 6: Research Ethics  
(Case Study: HIV prevention educational programs in Kenya)

Dupas, Pascaline. Relative Risks and the Market for Sex: Teenage Pregnancy, HIV, and Partner Selection in Kenya. Working paper.

Lecture Slides

Research Presentation Details


October 13: Regression Discontinuity
(Case Studies: scholarship program for girls in Kenya, educational finance in Chile)
 
Unpublished results from a follow-up on a girls’ merit scholarship program. For a description of the intervention, see Kremer, Michael, et al. Incentives to Learn. NBER Working Paper #10971, 2004.

Chay, Kenneth, et al. The Central Role of Noise in Evaluating Interventions that Use Test Scores to Rank Schools. American Economic Review. September 2005, 1237-1258.

Intro Worksheet

Lecture Slides

Problem Set 2 Assignment

Problem Set 2 Data

Problem Set 2 Answers
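As a preview of the mechanics behind Problem Set 2 (the course's own code is in STATA), here is a minimal sketch of a sharp regression discontinuity in Python, with entirely made-up data: fit a line to outcomes on each side of the cutoff and take the gap between the two fitted values at the cutoff as the estimated program effect.

```python
# Sharp regression discontinuity sketch on hypothetical data.
# Units with score >= cutoff receive the program; the estimate is the
# jump in outcomes at the cutoff.
cutoff = 50.0

def fit_line(points):
    """Ordinary least squares for y = a + b * x; returns (a, b)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    b = (sum((x - mx) * (y - my) for x, y in points)
         / sum((x - mx) ** 2 for x, _ in points))
    return my - b * mx, b

# (score, outcome) pairs on either side of the cutoff.
below = [(40, 20.0), (44, 22.0), (46, 23.0), (48, 24.0)]
above = [(50, 30.0), (52, 31.0), (56, 33.0), (58, 34.0)]

a0, b0 = fit_line(below)
a1, b1 = fit_line(above)

# The RD estimate is the gap between the two fitted lines at the cutoff.
effect = (a1 + b1 * cutoff) - (a0 + b0 * cutoff)
print(round(effect, 2))  # prints: 5.0
```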

October 20: External Validity
(Case Studies: anti-corruption programs in Indonesia and Brazil, & community-based monitoring
of health clinics in Uganda)  
 
Olken, Benjamin. Monitoring Corruption: Evidence from a Field Experiment in Indonesia. Journal of Political Economy. April 2007, 200-249.
 
Ferraz, Claudio and Frederico Finan. Exposing Corrupt Politicians: The Effects of Brazil’s Publicly Released Audits on Electoral Outcomes. Quarterly Journal of Economics, May 2008, 703-745.

Bjorkman, Martina and Jakob Svensson. Power to the People: Evidence from a Randomized Field Experiment of a Community-Based Monitoring Project in Uganda. CEPR Working Paper #6344, June 2007.

Lecture Slides

Case Studies
 
October 27: Matching, Propensity Score  
(Case studies: water infrastructure and children’s health in India & workfare in Argentina)
 
Jalan, Jyotsna and Martin Ravallion. Does Piped Water Reduce Diarrhea for Children in Rural India? Journal of Econometrics. January 2003, 153-173.

Jalan, Jyotsna and Martin Ravallion. Estimating the Benefit Incidence of an Antipoverty Program by Propensity Score Matching. Journal of Business and Economic Statistics. January 2003, 19-30.

Blattman, Christopher and Jeannie Annan. The Consequences of Child Soldiering. Forthcoming in The Review of Economics and Statistics.

Lecture Slides

Lecture Exercise

Problem Set 3

Dataset for Small Stata Users
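The core of Problem Set 3 can be previewed without STATA: after estimating each unit's propensity score, nearest-neighbor matching compares each treated unit with the control unit whose score is closest. The sketch below uses entirely hypothetical scores and outcomes (Python):

```python
# Nearest-neighbor matching on the propensity score (toy example).
# Each unit is (propensity_score, outcome); scores are assumed to have
# been estimated already, e.g. from a logit of treatment on covariates.
treated = [(0.8, 14.0), (0.6, 12.0), (0.4, 10.0)]
controls = [(0.82, 11.0), (0.55, 9.5), (0.35, 8.0), (0.1, 6.0)]

def nearest_control(score):
    """Control unit whose propensity score is closest to `score`."""
    return min(controls, key=lambda c: abs(c[0] - score))

# Average treatment effect on the treated: each treated unit is
# compared with its closest match on the score.
diffs = [y - nearest_control(p)[1] for p, y in treated]
att = sum(diffs) / len(diffs)
print(round(att, 2))  # prints: 2.5
```

The key assumption, discussed in this session's readings, is that conditional on the score there are no unobserved differences driving both treatment and outcomes.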

November 3: Data Quality, Logistics  
 
Baird, Sarah, Joan Hamory, and Edward Miguel. Tracking, Attrition and Data Quality in the Kenya Life Panel Survey Round 1. Working paper.

Worksheet

Lecture Slides
 
      
November 10: Difference-in-Difference
(Case studies: malaria eradication in the Americas and land reform in India)
 
Bleakley, Hoyt. Malaria Eradication in the Americas: A Retrospective Analysis of Childhood Exposure. Working paper.

Besley, Timothy and Robin Burgess. Land Reform, Poverty Reduction, and Growth: Evidence from India. Quarterly Journal of Economics. May 2000, 389-430.

Worksheet

Lecture Slides

Lecture Derivation

Problem Set 4
Small Stata Data Set
Intercooled/SE Stata Data Set
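The estimator behind Problem Set 4 can be previewed in a few lines (the problem set itself uses STATA). With before/after outcomes for a treated group and a comparison group, the difference-in-difference estimate nets out both the fixed gap between the groups and the common time trend; all numbers below are hypothetical (Python sketch):

```python
# Difference-in-difference on a toy dataset (hypothetical numbers).
# Each record is (group, period, outcome); "treat" receives the program
# between the "pre" and "post" periods, "control" does not.
data = [
    ("treat", "pre", 10.0), ("treat", "pre", 12.0),
    ("treat", "post", 18.0), ("treat", "post", 20.0),
    ("control", "pre", 11.0), ("control", "pre", 13.0),
    ("control", "post", 14.0), ("control", "post", 16.0),
]

def group_mean(group, period):
    vals = [y for g, p, y in data if g == group and p == period]
    return sum(vals) / len(vals)

# The estimate subtracts the control group's change over time (the
# common trend) from the treated group's change.
did = (group_mean("treat", "post") - group_mean("treat", "pre")) - (
    group_mean("control", "post") - group_mean("control", "pre")
)
print(did)  # prints: 5.0
```

The identifying assumption, stressed in the Bleakley and Besley-Burgess readings, is that absent the program the two groups would have followed parallel trends.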
 
November 17: Power Calculations

Lecture Slides

           Miguel's Slides

           Jamie's Slides

           Willa's Slides
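Since this session's materials are slides only, a compact reference may help: under the usual normal approximation, the sample size per arm needed to detect an effect of delta standard deviations with a two-sided test at significance level alpha and a given power is n = 2(z_{1-alpha/2} + z_{1-beta})^2 / delta^2. A sketch in Python, with the function name and example effect size chosen for illustration:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_sd, alpha=0.05, power=0.8):
    """Sample size per arm to detect a difference of `effect_sd`
    standard deviations between two equal-sized groups
    (two-sided test, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)           # ~0.84 for power = 0.80
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_sd ** 2)

# Detecting a 0.2 standard-deviation effect at 80% power requires
# roughly 400 subjects per arm; a 0.5 SD effect needs far fewer.
print(n_per_arm(0.2))  # prints: 393
print(n_per_arm(0.5))  # prints: 63
```

Small effects are expensive: halving the detectable effect size quadruples the required sample, which is why power calculations come before fieldwork.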

December 1: Summary/Group Presentations

Methods Summary Slides

Implementation Issues Slides

Guidance Questions

Summary Table

Glossary




For more information regarding research or employment opportunities, please visit CEGA's website, Innovations for Poverty Action's job listings, or the Poverty Action Lab's job listings. Or see Impact Evaluation at the World Bank, the International Initiative for Impact Evaluation (3ie), or the Network of Networks on Impact Evaluation (NONIE).