The NIH plan to fix social science

Here is an overview of what is up, and here is the plan itself.  Since it was produced by a bureaucracy rather than a blogger, it is hard to wade through the verbiage.  Nonetheless, one of the bottom lines is a call for greater unity of methods, and especially of terms, so that discrete studies by different researchers become more easily comparable, searchable, and aggregable into broader meta-studies, for instance:

In response to these types of measurement concerns, the Patient-Reported Outcomes Measurement Information System (PROMIS) developed a common scale or metric on which all measures of a given construct can be expressed. To achieve this, PROMIS developed and tested item banks using modern psychometric theory that, in addition to producing more precise and efficient measures, allow different measures of the same construct to be cocalibrated. As a result, different instruments measuring the same construct can be expressed on a single metric, aiding data harmonization and integration.

Another approach to addressing this data harmonization and integration challenge is to develop consensus measures for specific constructs. PhenX, for example, has developed a curated set of measurement protocols for specific phenotypic constructs. The NCI Grid-Enabled Measures website utilizes a crowdsourcing wiki approach to cataloging the various measures of a given social or behavioral construct. The National Library of Medicine has generated a directory of common data elements that serves as a repository for commonly accepted measures and data structures that, if adopted by researchers, would facilitate data integration across studies.
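The co-calibration idea in the quoted passage can be illustrated with a toy example. Real PROMIS co-calibration relies on item response theory and calibrated item banks; the minimal sketch below instead uses simple linear (mean-sigma) score linking, a standard psychometric shortcut, to re-express one hypothetical instrument's raw scores on another's metric. The instruments, scores, and function names here are all illustrative assumptions, not anything from the NIH plan.

```python
# Toy sketch of score linking ("co-calibration" in spirit): two
# hypothetical instruments A and B measure the same construct on
# different raw scales. Mean-sigma linear equating maps B's scores
# onto A's metric so results from both can be compared directly.
from statistics import mean, pstdev

def link_scores(scores_b, ref_a, target_b):
    """Map raw scores from instrument B onto instrument A's metric
    using the mean-sigma linear transformation: match the mean and
    standard deviation of B's calibration sample to A's."""
    slope = pstdev(ref_a) / pstdev(scores_b)
    intercept = mean(ref_a) - slope * mean(scores_b)
    return [slope * x + intercept for x in target_b]

# Hypothetical calibration data: the same respondents took both
# instruments, so the two score distributions describe one construct.
a = [50, 55, 60, 65, 70]   # instrument A raw scores
b = [10, 12, 14, 16, 18]   # instrument B raw scores

# A score of 14 on B (its sample mean) lands at A's sample mean, 60.
print(link_scores(b, a, [14]))
```

Actual item-bank co-calibration is considerably more involved (it estimates latent-trait parameters item by item), but the payoff is the same one the quote describes: different instruments for one construct expressed on a single metric.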

The original pointer is from Mitchell Eckert.  Keep in mind, economists, that depending on your definition of economics, the NIH arguably supports at least as much economics research as does the NSF.

You might also be interested in University of Wisconsin job market candidate Nathan Yoder, whose main paper, a theory paper, is on improving incentives for academic research.  Here is the latter part of the abstract:

In keeping with current practice, the institution contracts based on the experiment’s result instead of its methodology. This removes a degree of freedom from the optimal design problem, but I show that there need not be loss from doing so. The optimal contract has two general characteristics. First, to discourage the production of false positive results, negative results supporting conventional wisdom must be rewarded. Second, the most informative results must be disproportionately rewarded. To arrive at these conclusions, I contribute to the literature by characterizing solutions and comparative statics of Bayesian persuasion problems using differentiability.

These topics remain very much understudied.