Zacharias Maniadis, Fabio Tufano & John List
American Economic Review, forthcoming
The experimental method is taking on increasing importance within economics. We present a theoretical framework that provides insights into the optimal usage of the experimental method and the appropriate interpretation of experimental results. A key insight is that the rate of false positives depends not only on the observed significance level, but also on the statistical power of the test and on research priors. Through the lens of our model, we argue that most ‘surprising’ results published in the top scientific journals are likely false. As an example, we present evidence that a celebrated study with far-reaching economic implications reports results that are not replicable. The bad news is that this study is just one of hundreds that will not replicate. The good news is that a little replication goes a long way: a few independent replications dramatically increase the chances that the original finding is true.
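The abstract's claim can be illustrated with the standard post-study-probability calculus it alludes to: the chance that a significant finding is true depends on the significance level, the test's power, and the prior probability the hypothesis is true. The sketch below is illustrative only (the function names and numbers are not from the paper):

```python
def post_study_probability(prior, power, alpha):
    """P(effect is real | test was significant), by Bayes' rule.

    prior: prior probability the tested hypothesis is true
    power: probability of a significant result if the effect is real (1 - beta)
    alpha: probability of a significant result if the effect is not real
    """
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)


def psp_after_replications(prior, power, alpha, k):
    """Post-study probability after the original result plus k successful
    independent replications: each significant result multiplies the
    prior odds by power/alpha."""
    odds = (prior / (1 - prior)) * (power / alpha) ** (k + 1)
    return odds / (1 + odds)


# A 'surprising' finding: low prior (5%), modest power (50%), alpha = 5%.
print(post_study_probability(prior=0.05, power=0.5, alpha=0.05))   # ~0.34
# Two successful independent replications push it above 95%.
print(psp_after_replications(prior=0.05, power=0.5, alpha=0.05, k=2))
```

With these illustrative numbers, a significant but surprising result has only about a one-in-three chance of reflecting a real effect, while two successful replications raise that to over 95 percent, which is the sense in which "a little replication goes a long way."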
The second link in the list, “How Economists (Mis)Use Experimental Methods,” provides an ungated PDF of the piece. The top link sits behind an AEA member gate.
Hat tip goes to Kevin Lewis.