# Regression Discontinuity Works

Robert LaLonde’s famous 1986 paper, *Evaluating the Econometric Evaluations of Training Programs with Experimental Data*, shattered the confidence of the profession by showing that the advanced econometric techniques of the day, by and large, failed to recover the results from a randomized controlled trial. The profession has been busy ever since developing new methods and techniques.

A new paper compares regression discontinuity with RCTs and finds that RD works very well.

Theory predicts that regression discontinuity (RD) provides valid causal inference at the cutoff score that determines treatment assignment. One purpose of this paper is to test RD’s internal validity across 15 studies. Each of them assesses the correspondence between causal estimates from an RD study and a randomized controlled trial (RCT) when the estimates are made at the same cutoff point, where they should not differ asymptotically. However, statistical error, imperfect design implementation, and a plethora of different possible analysis options mean that they might nonetheless differ. We test whether they do, assuming that the bias potential is greater with RDs than RCTs. A second purpose of this paper is to investigate the external validity of RD by exploring how the size of the bias estimates varies across the 15 studies, for they differ in their settings, interventions, analyses, and implementation details. Both Bayesian and frequentist meta-analysis methods show that the RD bias is below 0.01 standard deviations on average, indicating RD’s high internal validity. When the study-specific estimates are shrunken to capitalize on the information the other studies provide, all the RD causal estimates fall within 0.07 standard deviations of their RCT counterparts, now indicating high external validity. With unshrunken estimates, the mean RD bias is still essentially zero, but the distribution of RD bias estimates is less tight, especially with smaller samples and when parametric RD analyses are used.
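The design the abstract describes can be illustrated with a toy simulation: treatment is assigned sharply at a cutoff of a running variable, and the RD estimate is the jump in the outcome at that cutoff, recovered by fitting a local line on each side. This is a minimal sketch, not the paper's actual procedure; the data-generating process, bandwidth, and all parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated sharp RD: treatment switches on when the running variable
# (e.g. a test score) crosses the cutoff. True treatment effect tau = 0.40.
n, tau, cutoff = 20_000, 0.40, 0.0
x = rng.uniform(-1, 1, n)            # running variable
d = (x >= cutoff).astype(float)      # sharp assignment rule
y = 1.0 + 0.5 * x + tau * d + rng.normal(0, 0.2, n)

def rd_estimate(x, y, cutoff=0.0, h=0.25):
    """Local linear RD: fit a line on each side of the cutoff within
    bandwidth h and take the difference of the fitted values at the cutoff."""
    left = (x < cutoff) & (x >= cutoff - h)
    right = (x >= cutoff) & (x <= cutoff + h)
    # After re-centering at the cutoff, the intercept of each fit
    # (polyfit returns [slope, intercept]) is the limit of E[y|x] there.
    b_left = np.polyfit(x[left] - cutoff, y[left], 1)[1]
    b_right = np.polyfit(x[right] - cutoff, y[right], 1)[1]
    return b_right - b_left

est = rd_estimate(x, y)
print(f"RD estimate: {est:.3f} (true effect: {tau})")
```

In this simulation the RD estimate lands close to the true effect, which is the asymptotic equivalence the studies test: at the cutoff, a well-implemented RD should match the RCT benchmark, with the noisier unshrunken estimates in the paper reflecting exactly the small-sample and specification sensitivity this kind of local fit exhibits.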