International Journal for Re-Views in Empirical Economics

Replication is critical for scientific progress and integrity but incentives for replication have been low. It’s good news, therefore, that a new journal will be devoted solely to replication research:

The International Journal for Re-Views in Empirical Economics (IREE) is the first journal dedicated to the publication of replication studies based on economic micro-data. Furthermore, IREE publishes synthesizing reviews, micro-data sets and descriptions thereof, and articles dealing with replication methods and the development of standards for replications.

Until now, authors of replication studies, data sets, and descriptions have had a hard time gaining recognition for their work through citable publications, and incentives for conducting this important kind of work were greatly reduced…. IREE provides a platform for authors to be given credit for serious empirical research in economics.

The publication of replication studies often depends on their results…. Replications usually need to reject the original study to get published, whereas replications confirming original findings are denied scientific impact. This induces a severe publication bias…. Therefore, IREE publishes research independent of the result of the study. The selection of published articles is based on technical and formal criteria, not on the qualitative and quantitative results.

Deaton, Wooldridge, and Easterlin are all involved.

Hat tip: David Roodman on Twitter.

Addendum: Also check out the inaugural Empirical Legal Studies Replication Conference, which will publish papers, independent of result, in an edition of the International Review of Law and Economics.


Regarding the Addendum (Empirical Legal Studies), if economics can become more like law (i.e., advocacy), then law (advocacy) can become more like economics (empiricism). Maybe the two will meet some place near the middle. [As I understand it, empirical legal studies isn't like law and economics but rather an empirical approach to determining what the law is.]

an empirical approach to determining what the law is.

I can save us all a lot of time and effort. The law is whatever the most people or the most people with the guns say it is.

John P. A. Ioannidis (born August 21, 1965 in New York City) is a Professor of Medicine and of Health Research and Policy at Stanford University School of Medicine and a Professor of Statistics at Stanford University School of Humanities and Sciences.

He is director of the Stanford Prevention Research Center and co-director, along with Steven N. Goodman, of the Meta-Research Innovation Center at Stanford (METRICS).[1][2] He was chairman of the Department of Hygiene and Epidemiology at the University of Ioannina School of Medicine, as well as adjunct professor at Tufts University School of Medicine.[3][4] He is best known for his research and published papers on the reliability of scientific studies, particularly the 2005 paper "Why Most Published Research Findings Are False".[5] Ioannidis is one of the most-cited scientists in the scientific literature, especially in the fields of clinical medicine and the social sciences, according to Thomson Reuters' Highly Cited Researchers 2015.[6]

Ioannidis's 2005 paper "Why Most Published Research Findings Are False"[7] has been the most downloaded technical paper from the journal PLoS Medicine.[14]

In another 2005 paper, Ioannidis analyzed "49 of the most highly regarded research findings in medicine over the previous 13 years". The paper compared the 45 studies that claimed to have uncovered effective interventions to subsequent studies with larger sample sizes: 7 (16%) of the studies were contradicted, 7 (16%) had effects that were smaller in the second study than in the first, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged.[15]

He has made many other influential empirical evaluations addressing the validation and replication performance of different types of studies in diverse scientific fields, including genetics,[16] clinical trials,[17] and neuroscience.[18] His work has also aimed to identify solutions on how to optimize research practices[19] and to increase the yield of validated and useful scientific findings.[20]

He coined the term Proteus phenomenon for the occurrence of extreme contradictory results in early studies performed on the same research question. He has also made a number of contributions in the field of meta-analysis (the science of combining data from multiple studies on the same research question) and has been President of the Society for Research Synthesis Methodology.

The Reproducibility Project: Psychology was a collaboration of 270 contributing authors to repeat 100 published experimental and correlational psychological studies. The project was led by the Center for Open Science and its co-founder, Brian Nosek, who started it in November 2011; the results were published in August 2015. Reproducibility is the ability to produce a copy or duplicate; in this case, it is the ability to replicate the results of the original studies. The project illustrated the growing problem of failed reproducibility in social science and started a movement that has spread through the scientific world, with expanded testing of the reproducibility of published works.[1]

A brief summary would have been perfect.

See my 2005 post giving a simple guide to Ioannidis's paper.

Another desperate, grasping attempt by Marginal Revolution [both bloggers stand accused in the dock] to conflate economics with science.

Give it up already. If you want to be taken seriously, why didn't you study physics or chemistry?


Excellent. Not just for the field but also for the contributors, who can get publishing credit for doing work that is useful though not pathbreaking. Replication of published studies won't be the sort of thing that gets you tenure at Harvard, but it could, or should, be the type of work at a teaching college that demonstrates you're actively keeping up with the field and doing some sort of work even if not wholly original.

When replications show that the original study was seriously wrong, especially regarding the signs of important coefficients, not just magnitudes or levels of significance, they should be publishable, and published in journals of at least the level of those in which the original studies appeared, preferably in the same journals. It is unfortunate that this is rarely the case, and I doubt that starting a journal for the purpose is going to help all that much. Maybe it will some, and I support it for that reason, if without much optimism.

Comments for this post are closed