There is a long-standing debate as to whether nonexperimental estimators of the causal effects of social programs can overcome selection bias. Most existing reviews are either inconclusive or point to significant selection bias in nonexperimental studies. However, many of these reviews are so-called between-study comparisons, which do not directly compare experimental and nonexperimental estimates on the same data. We survey four impact studies of development interventions that make such direct comparisons. Our review illustrates that when the program participation process is well understood and correctly modeled, nonexperimental estimators can overcome selection bias to the same degree as randomized controlled trials. We therefore suggest that evaluators of development programs formulate the statistical model of program assignment carefully and precisely, and that they use the assignment information for model-based systematic sampling.
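To illustrate the central point, that correctly modeling the participation process can remove selection bias, the following minimal simulation (not from the article; all names and parameter values are hypothetical) generates data in which program take-up depends only on an observed covariate. A naive comparison of participants and nonparticipants is then biased, while stratifying on the covariate that governs assignment recovers the true effect, as a selection-on-observables estimator would.

```python
import random

random.seed(0)

TRUE_EFFECT = 2.0   # hypothetical program impact
N = 20000           # hypothetical sample size

# Simulate: participation probability rises with an observed covariate x,
# and x also raises the outcome, so participants are positively selected.
rows = []
for _ in range(N):
    x = random.random()                 # observed covariate driving take-up
    p = 0.2 + 0.6 * x                   # participation probability
    t = 1 if random.random() < p else 0
    y = TRUE_EFFECT * t + 3.0 * x + random.gauss(0, 1)
    rows.append((x, t, y))

# Naive difference in means: biased upward, because participants have higher x.
treated = [y for x, t, y in rows if t == 1]
control = [y for x, t, y in rows if t == 0]
naive = sum(treated) / len(treated) - sum(control) / len(control)

# Adjusted estimate: stratify on the covariate that governs assignment and
# average the within-stratum differences (a crude correct model of selection).
K = 10
strata_effects = []
for k in range(K):
    lo, hi = k / K, (k + 1) / K
    t1 = [y for x, t, y in rows if t == 1 and lo <= x < hi]
    t0 = [y for x, t, y in rows if t == 0 and lo <= x < hi]
    if t1 and t0:
        strata_effects.append(sum(t1) / len(t1) - sum(t0) / len(t0))
adjusted = sum(strata_effects) / len(strata_effects)

print(f"true={TRUE_EFFECT:.2f} naive={naive:.2f} adjusted={adjusted:.2f}")
```

In this simulated setting the naive estimate overstates the effect, while the stratified estimate lands close to the true value. The same logic breaks down if assignment also depends on unobserved variables, which is why the abstract stresses that the participation process must be well understood before a nonexperimental estimator can match an experimental benchmark.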
American Journal of Evaluation, 2013, Vol. 34, Issue 3, pp. 320-338