It is hard to do better than Alex’s video on Romer, pretty much definitive and Romer liked it too. Most importantly, Romer won the Prize for seeing how the non-rival nature of ideas can boost ongoing and indeed “endogenous” economic growth. Romer also showed mathematically that this process of growth is bounded, namely that it does not explode without limit, and that the associated mathematical models were tractable. Previously, economists had feared that increasing returns to scale models might be impossible to work with. See the top two links here, for the 1989 and 1990 pieces, with the third piece listed, from 1994, being Romer’s easier to read summary of the work.
David Warsh’s Knowledge and the Wealth of Nations: A Story of Economic Discovery, is the book on Romer for the ages, a truly splendid creation on both the science and the person. Romer, by the way, is the son of Roy Romer, former Colorado governor and famous builder of airports. I believe this later influenced Paul’s interest in the importance of economic growth.
Over time, increasing returns models have come to be seen as less descriptive of growth than perhaps they were in the 1990s. The growth rates of many countries have been stagnant or even falling, rather than rising. Nonetheless, for understanding how ideas boost growth, and in cumulative fashion, Romer’s work is essential. If you are wondering “which economist has done the most to help us explain Silicon Valley,” you would turn first to Romer.
This Prize is not a surprise at all; it has been expected, sooner or later, for many years. (Though I did not think it would come this year: Trump talks so much about his role in boosting economic growth that I feared the Nobel Committee would not, at this point in time, wish to feed into that rhetoric. I am glad to see they did not hesitate!)
Here is Romer on Twitter. Here is Romer on Wikipedia. Here is Paul’s blog. He is now at NYU, but spent much of his career at Stanford. Previous MR coverage of Romer is quite extensive. Here is the Prize Committee citation, excellent as always. Here are his three podcasts with Russ Roberts. Here is Joshua Gans on Paul. Here is a Sebastian Mallaby profile of Romer.
Romer also in 2000 started and ran a successful business, Aplia, which revolutionized on-line education. In the context of economics, Aplia is most notable for enabling curve-shifting exercises and the like to be done through an electronic portal. It was later purchased by Cengage. So like Nordhaus, Romer also has been a doer, including in the private sector. Yet Paul once tweeted to Ben Bernanke that “Rich is over-rated.” It is too hard to convert money into satisfaction.
Romer recently served as Chief Economist at the World Bank, with a somewhat complicated tenure. You can find numerous articles about this in the media.
Romer has been a central figure behind the notion of “charter cities,” namely an economic region but with external or possibly foreign governance, so as to enforce the rule of law and spur economic growth. The charter cities idea comes rather naturally out of Romer’s work on the economics of growth. Think of Romer as asking “which is the non-rival public good which can be extended at very low cost?”, and wondering if that might be law. Here is his famous TED talk on charter cities. Here is an interview with Romer on charter cities. He was originally slated to work with the Honduran government on charter cities, though he dropped out of the project in 2012. Here is Paul’s account of what happened.
Amihai Glazer and I once wrote a comment on Romer, on his article with Barro on ski-lift pricing, which Glazer and I saw as closely connected to Buchanan’s theory of clubs. Romer later credited this comment with inducing him to rethink what the notion of rivalry really means in economics, and leading to his two best-known pieces on economic growth; see the David Warsh book for more detail.
Like myself, Romer is an avid fan of the guitarist Clarence White, and several times we have traded favorite Clarence White videos by email. Romer believes (correctly) that the role of Clarence White in the success of the Byrds is very much underrated, and furthermore he is a big fan of White’s early work with the Kentucky Colonels. Here is more on Romer’s excellent taste in music, recommended.
Romer also has a well-known survey piece on the importance of human capital for economic growth; human capital of course is where new ideas come from.
Here is a short Romer piece from 2016, suggesting his own work on growth implies “conditional optimism” on climate change, but not “complacent optimism.” This ties together his work with that of Nordhaus.
Romer is also an advocate of regularizing the spelling of the English language, so as to make it more phonetic. He believes this would boost the rate of economic growth, and it ties in with some of his work on economic integration and growth. If English is an easier language to learn, the global economy as a whole in effect becomes larger, and we might expect the rate of ideas generation to rise.
Here is Romer on Jupyter vs. Mathematica. Here is Romer on corruption in Greece, he has very broad interests. Here is Romer on TARP and banking reform. Here is Romer’s recent critique of macroeconomics.
Romer believes (and I concur) that the word “and” is used too much in writing, and in particular scholarly writing. From the FT:
Circulating a draft of the upcoming World Development Report, Mr Romer warned against bank staff trying to pile their own pet projects and messages into the report. The tendency, he argued, had diluted the impact of past reports and led to a proliferation of “ands”.
“Because of this type of pressure to say that our message is ‘this, and this, and this too, and that …’ the word ‘and’ has become the most frequently used word in Bank prose,” he complained in an email.
“A WDR, like a knife, has to be narrow to penetrate deeply,” he added. “To drive home the importance of focus, I’ve told the authors that I will not clear the final report if the frequency of ‘and’ exceeds 2.6%.”
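Romer’s 2.6% threshold is easy to check mechanically. Here is a minimal sketch of such a word-frequency test (the threshold is from the quote above; the sample sentence is invented):

```python
import re

def and_frequency(text):
    """Fraction of words in `text` that are 'and' (case-insensitive)."""
    words = re.findall(r"[a-zA-Z']+", text.lower())
    return words.count("and") / len(words) if words else 0.0

# An invented, "and"-heavy sentence of the kind Romer objected to
draft = "Growth and trade and institutions and policy all matter."
print(f"'and' frequency: {and_frequency(draft):.1%}")  # 3 of 9 words, well above 2.6%
```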
I have always found Romer to be extremely pleasant and open in my interactions with him, and I was very pleased to interview him (no transcript or audio available) at a summer ideas festival (Kent Presents) last year. The crowd found him very open and engaging.
These are excellent Nobel Prize selections, Romer for economic growth and Nordhaus for environmental economics. The two picks are brought together by the emphasis on wealth, the true nature of wealth, and how nations and societies fare at the macro level. These are two highly relevant picks. Think of Romer as having outlined the logic behind how ideas leverage productivity into ongoing spurts of growth, as for instance we have seen in Silicon Valley. Think of Nordhaus as explaining how economic growth interacts with the value of the environment. Here is their language:
- 2018 Sveriges Riksbank Prize in Economic Sciences is awarded jointly to William D Nordhaus “for integrating climate change into long-run macroeconomic analysis” and Paul M Romer “for integrating technological innovations into long-run macroeconomic analysis”.
Both are Americans, and both have highly innovative but also “within the mainstream” approaches. So this is a macro prize, but not for cycles, rather for growth and long-term economic prospects. Here is the Prize committee citation, always well done.
Both candidates were considered heavy favorites to win the Prize, sooner or later, and these selections cannot come as a surprise. Perhaps it is slightly surprising that they won the Prize together, though the basic logic of such a combination makes good sense. Here are previous MR mentions of Nordhaus, you can see we have been mentioning him for years in connection with the Prize.
Nordhaus is professor at Yale, and most of all he is known for his work on climate change models, and his connection to various concepts of “green accounting.” To the best of my knowledge, Nordhaus started working on green accounting in 1972, when he published with James Tobin (also a Laureate) “Is Growth Obsolete?“, which raised the key question of sustainability. Green accounting attempts to outline how environmental degradation can be measured against economic growth. This endeavor is not so easy, however, as environmental damage can be hard to measure and furthermore gdp is a “flow” and the environment is (often, not always) best thought of as a “stock.”
Nordhaus developed (with co-authors) the Dynamic Integrated Climate-Economy Model, a pioneering effort to develop a general approach to estimating the costs of climate change. Subsequent efforts, such as the London IPCC group, have built directly on Nordhaus’s work in this area. The EPA still uses a variant of this model. The model was based on earlier work by Nordhaus himself in the 1970s, and he refined it over time in a series of books and articles, culminating in several books in the 1990s. Here is his well-cited piece, with Mendelsohn and Shaw, on how climate change will affect global agriculture.
Nordhaus also was an early advocate of a carbon tax. Furthermore, note that his brother Bob wrote part of the Clean Air Act, the part that gave the government the right to regulate hitherto-unmentioned pollutants in the future. The Obama administration, in its later attempts to regulate climate, cited this provision.
I would say that much of Nordhaus’s work has its impact through being “done,” rather than through being “read.” Few economists have read through this model, which has computer programs and spreadsheets at its core. But virtually all economists read about the results of such models and have a general sense of how they work. The most common criticism of such models, by the way, is simply that their results are highly sensitive to the choice of discount rate.
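The discount-rate criticism can be made concrete with simple present-value arithmetic; the figures below are illustrative only, not taken from DICE:

```python
def present_value(damage, rate, years):
    """Present value of a future damage, discounted at a constant annual rate."""
    return damage / (1 + rate) ** years

# A hypothetical $1 trillion of climate damage occurring 100 years from now
for rate in (0.01, 0.03, 0.05):
    pv = present_value(1e12, rate, 100)
    print(f"discount rate {rate:.0%}: present value ≈ ${pv / 1e9:,.0f} billion")
```

Moving the discount rate from 1% to 5% shrinks the present value of the same damage by a factor of roughly fifty, which is why model results hinge so heavily on that single parameter.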
In recent years, Nordhaus has shifted his emphasis to the risks from climate change, for instance in his book The Climate Casino: Risk, Uncertainty, and Economics for a Growing World. Marty Weitzman offers a good review, as does Krugman.
Assorted pieces of information on Nordhaus:
Nordhaus was briefly Provost at Yale. He also ended up being co-author on Paul Samuelson’s famous textbook in economics.
He co-authored a recent paper arguing we are not near the economic singularity; in this area his work intersects with Romer’s quite closely.
Bill Nordhaus, 72, a Yale economist who is seen as a leading contender for a Nobel Prize, came up with the idea of a carbon tax and effectively invented the economics of climate change. Bob, 77, a prominent Washington energy lawyer, wrote an obscure provision in the Clean Air Act of 1970 that is now the legal basis for a landmark climate change regulation, to be unveiled by the White House next month, that could close hundreds of coal-fired power plants and define President Obama’s environmental legacy.
Bob, Bill’s brother, once said: “Growing up in New Mexico, you’re aware of the very fragile ecosystem.”
Perhaps my personal favorite Nordhaus paper is on the returns to innovation. Don Boudreaux summarized it well:
In a recent NBER working paper – “Schumpeterian Profits in the American Economy: Theory and Measurement” – Yale economist William Nordhaus estimates that innovators capture a mere 2.2% of the total “surplus” from innovation. (The total surplus of innovation is, roughly speaking, the total value to society of innovation above the cost of producing innovations.) Nordhaus’s data are from the post-WWII period.
The smallness of this figure is astounding. If it is anywhere close to being an accurate estimate, the implication is that “society” pays a paltry $2.20 for every $100 worth of welfare it enjoys from innovating activities.
There again you will see a complete intersection with the ideas of Romer. Another splendid and still-underrated paper by Nordhaus is on the economics of light. Nordhaus argues that gdp figures understate the true extent of growth, and shows that the relative price of bringing light to humans has fallen more rapidly than gdp growth figures alone might indicate. Check out this diagram. Here is a BBC summary of what Nordhaus did; in other words, rates of price inflation have been lower than we thought, and thus rates of real gdp growth higher.
Again, you will see Nordhaus and Romer intersecting on this key idea of economic growth.
Last but not least, Nordhaus was a pioneer on the theory of the political business cycle, namely the idea that politicians deliberately manipulate the economy, using monetary and fiscal policy, so as to boost their chances of reelection. Dare I suggest that this idea might be making a comeback?
Addendum: From Margaret Collins by email: “I’d like to call your attention to Professor Nordhaus’ longstanding association with the International Institute for Applied Systems Analysis (IIASA), the international science and policy research institution located just outside Vienna. He worked at IIASA shortly after the institute’s creation in 1972, and his work there is closely bound to the issues the Nobel Committee cites in the award — he was employed for a year in 1974-75, doing pioneering work on climate as part of IIASA’s Energy Program, and producing a working paper entitled “Can We Control Carbon Dioxide?”. That was perhaps the first economics treatment of climate change — and Nordhaus dates his work on climate as having begun there. He has visited IIASA numerous times in the intervening years, and remains a close collaborator, particularly with Nebojsa Nakicenovic, the Institute’s Deputy Director.”
And, from the comments: “Nordhaus also helped pioneer the use of satellite imagery of night time lights as a tool for measuring economic growth, where we’ve played around with some of the publicly available tools to support various analysis.”
Here is coverage from The Chronicle; the bottom line is that a number of humanities journals were trolled by phony submissions, and yes, the journals accepted some absurd articles.
I would frame the matter somewhat differently, and perhaps more cynically. Not every undergraduate major can have students as smart and as rigorous as we find, say, in mathematics. And yes, I do mean some of the humanities majors. In the resulting equilibrium, the rigor and smarts of associated faculty vary across fields as well. The top people in quantum mechanics have passed through some pretty tough filters. But again, we cannot usefully generalize those filters across all fields and majors in a country where such a high percentage of people attend college. (Slow improvement can come from K-12 progress, of course, and we should fight for that.) Some of the majors have to be easier than others; no names will be named. By the way, don’t assume that basket-weaving is such an easy skill!
So simply calling for higher standards in the fields you object to begs the question. Instead ask “what are those fields for?” And “might I prefer a different kind of error process in those fields?” And “Might I want those fields to be (partly) bad in a very different way?” You probably have to compare bad against bad, not bad against “my personal sense of what clearly would be better.”
After such inquiries, you still will find that too much bogus work is being researched and published in journals. The most rigorous fields in turn tend to have too much irrelevant or overspecialized work — is all of string theory or for that matter game theory so much to be envied?
Many of you will be inclined to call for fewer subsidies. I won’t tackle that larger question right now; I’ll just note that any system-wide subsidies — especially egalitarian ones — also will boost the less rigorous fields and majors, and in some manner you need to be prepared to live with the not entirely rigorous consequences of that.
Overall I view bad pieces in the humanities as a potential profit opportunity, rather than something to just whine about. You don’t like those troll-published pieces? Get to work!
Addendum: You will note that the sociology journals were not fooled by the troll submissions. Among many outsiders, sociology is a much-underrated field.
In the excellent The Secret of Our Success, Joe Henrich gives many examples of complex technological products and practices which were not the product of intelligence but rather of many small, poorly understood improvements transmitted culturally down the generations. Derex et al. offer an ingenious experimental test of this cultural transmission hypothesis.
Participants in the experiment were presented with a wheel with weights that could be moved along four axes, and they were asked to place the weights to maximize the speed at which the wheel moved down a track. The problem isn’t trivial, since an optimal solution requires placing the weights in different spots to take advantage of both inertial and potential energy. Participants were organized into chains of five. Each participant was given five trials. The weight configuration and the results of the last two trials were passed on to the next person in the chain. Thus, people farther down the chain potentially “inherit” more cultural knowledge.
What were the results? First, average wheel speed increased across the generations, from 123.6 m/h on the final trial of the first generation/participant to 145.7 m/h by the last trial of the fifth generation. The researchers also tested whether participants improved their understanding of the causes of wheel speed by asking them to predict which of a series of wheel configurations would spin the fastest. If the faster speed of the fifth generation reflected learning by doing, we would expect the fifth generation to make better predictions. In fact, there was no learning over time. Technology improved; understanding did not.
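The transmission-chain design can be mimicked with a toy simulation. Everything below is invented for illustration (the speed function, the tweak sizes; only the chain length of five matches the experiment): each “participant” inherits the best configuration so far and blindly tweaks it, so performance can improve with no understanding of the underlying physics at all.

```python
import random

def wheel_speed(weights):
    """Toy stand-in for the wheel physics; participants never see this formula,
    only the speeds it produces."""
    a, b, c, d = weights
    return 100 + 30 * a - 20 * (b - 0.7) ** 2 + 10 * c * d

def run_chain(generations=5, trials=5, seed=0):
    rng = random.Random(seed)
    config = [rng.random() for _ in range(4)]  # generation 1 starts blind
    history = []
    for _ in range(generations):
        best, best_speed = config, wheel_speed(config)
        for _ in range(trials):
            # Blind tweak of the inherited design: no theory, just trial and error
            trial = [min(1.0, max(0.0, w + rng.gauss(0, 0.2))) for w in best]
            if wheel_speed(trial) > best_speed:
                best, best_speed = trial, wheel_speed(trial)
        history.append(best_speed)
        config = best  # "culturally transmitted" to the next participant
    return history

print(run_chain())  # speeds tend upward along the chain
```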
The authors then did an especially clever test. They allowed each generation/participant to leave the next generation a “theory” of wheel speed. Did this “book learning” speed up the evolution of technology? It did not. Moreover, theory transmission didn’t even result in much learning! Indeed, in some respects theories actually reduced learning because people who inherited a theory tended to believe it to the exclusion of other theories and, as a result, they reduced their exploration of the design space.
Of the 56 participants who received a theory… 15 received an inertia-related theory, 17 received an energy-related theory, 6 received a full theory and 18 received diverse, irrelevant theories
…inherited theories strongly affected participant’s understanding of the wheel system. Participants who did not inherit any theory (“Configurations” treatment) scored similarly (and better than chance) on questions about inertia and questions about energy (Fig. 3I). In comparison, participants who inherited an inertia- or energy- related theory showed skewed understanding patterns. Inheriting an inertia-related theory increased their understanding of inertia, but decreased their understanding of energy; symmetrically, inheriting an energy-related theory increased their understanding of energy, but decreased their understanding about inertia. One explanation for this pattern is that inheriting a unidimensional theory makes individuals focus on the effect of one parameter while blinding them to the effects of others. However, participants’ understanding may also result from different exploration patterns. For instance, participants who received an inertia-related theory mainly produced balanced wheels (Fig. 3F), which could have prevented them from observing the effect of varying the position of the wheel’s center of mass.
…These results suggest that the understanding patterns observed in participants who received unidimensional theories is likely the result of the canalizing effect of theory transmission on exploration. Note that in the present case, this canalizing effect is performance-neutral: with our 2-dimensional problem, better understanding of one dimension and worse understanding of one dimension simply compensate each other. For a many-dimensional problem, though, better understanding of one dimension is unlikely to compensate for worse understanding of all the others.
One aspect of knowledge transmission that is more difficult to study is the role of the genius. Cultural transmission can get stuck in local optima; only the genius can see over the valley to the mountain. The occasional genius may have been important even in knowledge generation in the pre-science era. In addition, these kinds of cultural evolution processes work best when feedback is quick and clear. Lengthen the time between input and output and all bets are off. Still, this peculiar experiment illustrates how much cultural transmission can achieve, and how theory can so dominate our thinking that it reduces vital experimentation.
Pindyck, from MIT, is a leading expert in this area, here is part of his summary conclusion:
It would certainly be nice if the problems with IAMs [integrated assessment models] simply boiled down to an imprecise knowledge of certain parameters, because then uncertainty could be handled by assigning probability distributions to those parameters and then running Monte Carlo simulations. Unfortunately, not only do we not know the correct probability distributions that should be applied to these parameters, we don’t even know the correct equations to which those parameters apply. Thus the best one can do at this point is to conduct a simple sensitivity analysis on key parameters, which would be more informative and transparent than a Monte Carlo simulation using ad hoc probability distributions. This does not mean that IAMs are of no use. As I discussed earlier, IAMs can be valuable as analytical and pedagogical devices to help us better understand climate dynamics and climate–economy interactions, as well as some of the uncertainties involved. But it is crucial that we are clear and up-front about the limitations of these models so that they are not misused or oversold to policymakers. Likewise, the limitations of IAMs do not imply that we have to throw up our hands and give up entirely on estimating the SCC [social costs of carbon] and analyzing climate change policy more generally.
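Pindyck’s preferred “simple sensitivity analysis” could be sketched as follows; the `toy_scc` function is a made-up placeholder, not a real integrated assessment model:

```python
def toy_scc(climate_sensitivity, discount_rate):
    """Made-up placeholder for an IAM's social cost of carbon, in $/ton."""
    return 40 * climate_sensitivity / (100 * discount_rate)

base = {"climate_sensitivity": 3.0, "discount_rate": 0.03}
print(f"base case: ${toy_scc(**base):.0f}/ton")

# One-at-a-time sensitivity: vary each key parameter, hold the others at base
for param, values in [("climate_sensitivity", [1.5, 4.5]),
                      ("discount_rate", [0.01, 0.05])]:
    for v in values:
        scc = toy_scc(**{**base, param: v})
        print(f"  {param} = {v}: ${scc:.0f}/ton")
```

The point is transparency: each row shows exactly which assumption moved the answer, whereas a Monte Carlo over ad hoc distributions would bury the same information in a single summary number.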
Donna Strickland (at right) was on Tuesday named one of the three winners of the 2018 Nobel Prize in Physics. Many have noted that she is the first woman in 55 years to win the prize. The BBC noted in a radio interview that Strickland is an associate professor at the University of Waterloo and asked why she was not a full professor. She said she never applied. She laughed when asked if she would apply now.
It’s a lot of work to apply for full professor, in terms of compiling one’s dossier, writing a research and teaching statement, cultivating letter writers, and so on. At many schools you might get a raise of say $1500 for the promotion? Apply Canadian tax rates to that. That could be accompanied by more administrative responsibilities, such as pressure to become department chair at some point.
Hail Donna Strickland!
Here is my earlier description of Emergent Ventures. In addition to the general request for proposals, we are looking to fund research in two particular directions, so if you are interested I would encourage you to apply here. Here goes:
1. What do we know about the best ways to search for additional talent? What features characterize successful talent searches?
2. How do people make “big” decisions? This could include the decision to migrate from one country to another, the decision to change religions, the decision to start a new business, to marry, and so on. Are there general principles here? What is known or believed, either theoretically or empirically?
We are open as to what form a contribution might take, but as a default I am envisioning a paper (on either) of say 60-80 pp., surveying and conceptually summarizing academic literature, but written for very smart non-academics and with a somewhat practical bent.
I encourage you to apply, both on these topics and more generally.
An elementary mathematical theory based on “selectivity” is proposed to address a question raised by Charles Darwin, namely, how one gender of a sexually dimorphic species might tend to evolve with greater variability than the other gender. Briefly, the theory says that if one sex is relatively selective then from one generation to the next, more variable subpopulations of the opposite sex will tend to prevail over those with lesser variability; and conversely, if a sex is relatively non-selective, then less variable subpopulations of the opposite sex will tend to prevail over those with greater variability. This theory makes no assumptions about differences in means between the sexes, nor does it presume that one sex is selective and the other non-selective. Two mathematical models are presented: a discrete-time one-step statistical model using normally distributed fitness values; and a continuous-time deterministic model using exponentially distributed fitness levels.
That is from a new paper by Theodore P. Hill, via Derek. Here is some of the history behind the paper, which ended up being “spiked.” And here is Andrew Gelman’s take. Here are relevant emails related to the dispute.
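The core mechanism of Hill’s model can be illustrated with a few lines of arithmetic on normal distributions; the cutoffs and standard deviations below are arbitrary choices for illustration, not values from the paper:

```python
from statistics import NormalDist

def fraction_accepted(sd, cutoff):
    """Share of a mean-zero normal subpopulation whose fitness exceeds a cutoff."""
    return 1 - NormalDist(mu=0, sigma=sd).cdf(cutoff)

# A selective opposite sex accepts only mates well above the mean (cutoff = +1):
# the more variable subpopulation gets a larger share accepted.
print(fraction_accepted(0.5, 1.0))   # low-variance subpopulation
print(fraction_accepted(1.5, 1.0))   # high-variance subpopulation

# A non-selective opposite sex (cutoff = -1) reverses the ranking:
print(fraction_accepted(0.5, -1.0))
print(fraction_accepted(1.5, -1.0))
```

Under selectivity, the high-variance group’s extra upper tail is what gets through; under non-selectivity, its extra lower tail is what gets excluded. That is the symmetry the abstract describes.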
Even with a question mark, my title, Do Boys Have a Comparative Advantage in Math and Science?, is likely to appear sexist. Am I suggesting that boys are better at math and science than girls? No, I am suggesting they might be worse.
Consider first the so-called gender-equality paradox, namely the finding that countries with the highest levels of gender equality tend to have the lowest ratios of women to men in STEM education. Stoet and Geary put it well:
Finland excels in gender equality (World Economic Forum, 2015), its adolescent girls outperform boys in science literacy, and it ranks second in European educational performance (OECD, 2016b). With these high levels of educational performance and overall gender equality, Finland is poised to close the STEM gender gap. Yet, paradoxically, Finland has one of the world’s largest gender gaps in college degrees in STEM fields, and Norway and Sweden, also leading in gender-equality rankings, are not far behind (fewer than 25% of STEM graduates are women). We will show that this pattern extends throughout the world…
Two explanations for this apparent paradox have been offered. First, countries with greater gender equality tend to be richer and have larger welfare states than countries with less gender equality. As a result, less is riding on the choice of career in the richer, gender-equal countries. Even if STEM fields pay more, we would expect small differences in personality that vary with gender to become more apparent as income increases. Paraphrasing John Adams, only in a rich country are people free to pursue their interests more than their needs. If women are somewhat less interested in STEM fields than men, then we would expect this difference to become more apparent as income increases.
A second explanation focuses on ability. Some people argue that more men than women have extraordinary ability levels in math and science because of greater male variability in most characteristics. Let’s put that hypothesis to the side. Instead, let’s think about individuals and their relative abilities in reading, science, and math; this is what Stoet and Geary call an intra-individual score. Now consider the figure below, which is based on PISA test data from approximately half a million students across many countries. On the left are raw scores (normalized). Focus on the colors: red is for reading, blue is science, and green is mathematics. Negative scores (scores to the left of the vertical line) indicate that females score higher than males; positive scores indicate that males score higher on average than females. Females score higher than males in reading in every country surveyed. Females also score higher than males in science and math in some countries.
Now consider the data on the right. In this case, Stoet and Geary ask, for each student, which subject they are relatively best at, and then average by country. The differences by sex are now even more prominent. Not only are females better at reading, but even in countries where they are on average better at math and science than boys, they are relatively better still at reading.
Thus, even when girls outperformed boys in science, as was the case in Finland, girls generally performed even better in reading, which means that their individual strength was, unlike boys’ strength, reading.
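The intra-individual score is simple to compute: standardize each subject across the population, then center each student’s standardized scores on that student’s own average. A sketch with invented scores (not PISA data), loosely following Stoet and Geary’s description:

```python
from statistics import mean, pstdev

def intra_individual(students, subjects):
    """For each student, z-score each subject across the population, then
    subtract the student's own mean z-score (the intra-individual score)."""
    stats = {s: (mean(d[s] for d in students.values()),
                 pstdev(d[s] for d in students.values())) for s in subjects}
    rel = {}
    for name, scores in students.items():
        z = {s: (scores[s] - stats[s][0]) / stats[s][1] for s in subjects}
        own_mean = mean(z.values())
        rel[name] = {s: z[s] - own_mean for s in subjects}
    return rel

# Invented scores: girls lead strongly in reading, boys lead modestly in math
students = {
    "girl_1": {"reading": 600, "math": 520, "science": 540},
    "girl_2": {"reading": 560, "math": 500, "science": 520},
    "boy_1":  {"reading": 500, "math": 540, "science": 530},
    "boy_2":  {"reading": 480, "math": 520, "science": 510},
}
rel = intra_individual(students, ["reading", "math", "science"])
for name, r in rel.items():
    print(name, "relative strength:", max(r, key=r.get))
```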
Now consider what happens when students are told, “Do what you are good at!” Loosely speaking, the situation will be something like this: females will say, “I got As in history and English and Bs in Science and Math; therefore, I should follow my strengths and specialize in fields drawing on the same skills as history and English.” Boys will say, “I got Bs in Science and Math and Cs in history and English; therefore, I should follow my strengths and do something involving Science and Math.”
On average, females have about the same average grades in UP (“University Preparation”, AT) math and sciences courses as males, but higher grades in English/French and other qualifying courses that count toward the top 6 scores that determine their university rankings. This comparative advantage explains a substantial share of the gender difference in the probability of pursuing a STEM major, conditional on being STEM ready at the end of high school.
Put (too) simply: the only men who are good enough to get into university are men who are good at STEM. Women are good enough to get into both non-STEM and STEM fields. Thus, among university students, women dominate in the non-STEM fields and men survive in the STEM fields.
Finally, Stoet and Geary show that the above considerations also explain the gender-equality paradox, because the intra-individual differences are largest in the most gender-equal countries. In the figure below, on the left are the intra-individual differences in science by gender, which increase with gender equality. A higher score means that boys are more likely to have science as a relative strength (i.e., women may get absolutely better at everything with gender equality, but the figure suggests that they get relatively better at reading). On the right is the share of women going into STEM fields, which decreases with gender equality.
The male dominance in STEM fields is usually seen as due to a male advantage and a female disadvantage (whether genetic, cultural or otherwise). Stoet and Geary show that the result could instead be due to differences in relative advantage. Indeed, the theory of comparative advantage tells us that we could push this even further than Stoet and Geary. It could be the case, for example, that males are worse on average than females in all fields but they specialize in the field in which they are the least worst, namely science and math. In other words, boys could have an absolute disadvantage in all fields but a comparative advantage in math and science. I don’t claim that theory is true but it’s worth thinking about a pure case to understand how the same pattern can be interpreted in diametrically different ways.
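The “pure case” in the last paragraph takes only a few lines to write down. In the invented numbers below, boys score lower than girls in both subjects, yet still hold the comparative advantage in math:

```python
# Invented scores: boys have an absolute disadvantage in both subjects
scores = {
    "girls": {"reading": 90, "math": 80},
    "boys":  {"reading": 70, "math": 68},
}

# Comparative advantage in math goes to whoever gives up the least reading
# ability per unit of math ability, i.e. the higher math/reading ratio
ratios = {g: s["math"] / s["reading"] for g, s in scores.items()}
math_specialist = max(ratios, key=ratios.get)

print(ratios)            # girls ≈ 0.89, boys ≈ 0.97
print(math_specialist)   # boys specialize in math despite being worse at it
```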
That is the new and excellent book by David Quammen, here is one summary excerpt:
We are not precisely who we thought we were. We are composite creatures, and our ancestry seems to arise from a dark zone of the living world, a group of creatures about which science, until recent decades, was ignorant. Evolution is trickier, far more intricate, than we had realized. The tree of life is more tangled. Genes don’t move just vertically; they can also pass laterally across species boundaries, across wider gaps, even between different kingdoms of life, and some have come sideways into our own lineage — the primate lineage — from unsuspected, nonprimate sources. It’s the genetic equivalent of a blood transfusion or (different metaphor, preferred by some scientists) an infection that transforms identity. “Infective heredity.” I’ll say more about that in its place.
My favorite part of the book is the section, starting on p.244, on bacteria that are resistant to antibiotics that have not yet been invented. Overall this is likely to prove the best popular science book of the year, you can buy it here. Here are various reviews of the book.
A paper in Science covering more than 80,000 articles in 923 scientific journals finds that rejected papers are ultimately cited more than first acceptances.
We compared the number of times articles were cited (as of July 2011, i.e., 3 to 6 years after publication, from ISI Web of Science) depending on their being first-intents or resubmissions. We used methods robust to the skewed distribution of citation counts to ensure that a few highly cited articles were not driving the results. We controlled for year of publication, publishing journal (and thus impact factor), and their interaction. Resubmissions were significantly more cited than first-intents published the same year in the same journal.
The authors argue that the most likely explanation is that peer review increases the quality of manuscripts. Rejection makes you stronger. That’s possible, although the data are also consistent with peers being more likely to reject better papers!
Papers in economics are often too long but papers in Science are often too short. Consider the paragraph I quoted above. What is the next piece of information you are expecting to learn? How many more citations a resubmission receives! It’s bizarre that the paper never gives this number (as far as I can tell). Moreover, take a look at the figure (at right) that accompanies the discussion. The authors say the difference in citations is highly significant, but on this figure (which is on a log scale) the difference looks tiny! The figure takes up a lot of space. What is it saying?
So what’s the number? Well, if you go to the online materials section the authors still don’t state the number, but from a table one can deduce that resubmissions receive approximately 7.5% more citations. That’s not bad, but we never learn how many citations first acceptances receive, so it could amount to less than one extra citation.
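To see why the missing baseline matters, here is a back-of-the-envelope calculation. The baseline citation counts below are assumed, not from the paper, since the paper never reports one:

```python
# The paper implies resubmissions get ~7.5% more citations, but never
# reports the baseline citation count for first acceptances.
relative_gain = 0.075

# Try a few plausible (invented) baselines and see the absolute gap.
for baseline in [5, 10, 20]:  # hypothetical mean citations per article
    extra = baseline * relative_gain
    print(f"baseline {baseline:2d} citations -> {extra:.2f} extra citations")

# With a baseline of 10 citations, a 7.5% gain is less than one
# extra citation per paper, which is why the baseline matters.
```

The relative effect sounds meaningful; the absolute effect could easily be trivial.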
There’s something else that is odd. The authors say that about 75% of published articles are first-submissions. But top journals like Science and Nature reject 93% or more of submissions. Those two numbers don’t necessarily contradict each other: if everyone submits to Science first, or if no one ever resubmits to Science, then 100% of papers published in Science will be first submissions. Nevertheless, the 93% of papers rejected at top journals (and lower-ranked journals also have high rejection rates) are going somewhere, so for the system as a whole 75% seems implausibly high.
Econ papers sometimes exhaust me with robustness tests long after I have been convinced of the basic result, but Science papers often leave me puzzled about basic issues of context and interpretation. This, too, is puzzling. Shouldn’t more important papers be longer? Or is the value of time of scientists higher than that of economists, so it’s optimal for scientists to both write and read shorter papers? The length of law review articles would suggest that lawyers have the lowest value of time, except that doesn’t seem to be reflected in their consulting fees or wages. There is a dissertation to be written on the optimal length of scientific publications.
Scientists in developed countries provide nearly three times as many peer reviews per paper submitted as researchers in emerging nations, according to the largest ever survey of the practice.
The report — which surveyed more than 11,000 researchers worldwide — also finds a growing “reviewer fatigue”, with editors having to invite more reviewers to get each review done. The number rose from 1.9 invitations in 2013 to 2.4 in 2017…
The report notes that finding peer reviewers is becoming harder, even as the overall volume of publications rises globally (see ‘Is reviewer fatigue setting in?’).
File under “the cost disease strikes back.” Furthermore, it seems increasingly obvious that a lot of lesser journals just don’t matter, and that may discourage prospective referees from putting in the effort. And note:
In 2013–17, the United States contributed nearly 33% of peer reviews, and published 25.4% of articles worldwide. By contrast, emerging nations did 19% of peer reviews, and published 29% of all articles.
China stood out — the country accounted for 13.8% of scientific articles during the period, but did only 8.8% of reviews.
Nearly thirty years ago my GMU colleague Robin Hanson asked, Could Gambling Save Science? We now know that the answer is yes. Robin’s idea to gauge the quality of scientific theories using prediction markets, what he called idea futures, has been validated. Camerer et al. (2018), the latest paper from the Social Science Replication Project, tried to replicate 21 social-science studies published in Nature or Science between 2010 and 2015. Before the replications were run, the authors ran a prediction market (as they had done in previous replication research), and once again the prediction market did a very good job of predicting which studies would replicate and which would not.
Ed Yong summarizes in the Atlantic:
Consider the new results from the Social Sciences Replication Project, in which 24 researchers attempted to replicate social-science studies published between 2010 and 2015 in Nature and Science—the world’s top two scientific journals. The replicators ran much bigger versions of the original studies, recruiting around five times as many volunteers as before. They did all their work in the open, and ran their plans past the teams behind the original experiments. And ultimately, they could only reproduce the results of 13 out of 21 studies—62 percent.
As it turned out, that finding was entirely predictable. While the SSRP team was doing their experimental re-runs, they also ran a “prediction market”—a stock exchange in which volunteers could buy or sell “shares” in the 21 studies, based on how reproducible they seemed. They recruited 206 volunteers—a mix of psychologists and economists, students and professors, none of whom were involved in the SSRP itself. Each started with $100 and could earn more by correctly betting on studies that eventually panned out.
At the start of the market, shares for every study cost $0.50 each. As trading continued, those prices soared and dipped depending on the traders’ activities. And after two weeks, the final price reflected the traders’ collective view on the odds that each study would successfully replicate. So, for example, a stock price of $0.87 would mean a study had an 87 percent chance of replicating. Overall, the traders thought that studies in the market would replicate 63 percent of the time—a figure that was uncannily close to the actual 62-percent success rate.
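The arithmetic behind the market is simple: a share pays $1 if the study replicates and $0 if it doesn’t, so the price is the traders’ implied probability. A quick check of the numbers quoted above:

```python
# A share pays $1 on successful replication, so price = implied probability.
price = 0.87
implied_prob = price / 1.00
print(f"implied probability of replication: {implied_prob:.0%}")  # 87%

# The traders' average price across the 21 studies vs. the actual outcome:
market_average = 0.63          # average final share price
actual_rate = 13 / 21          # 13 of 21 studies replicated
print(f"market forecast: {market_average:.0%}, actual: {actual_rate:.1%}")
# 13/21 is about 61.9%, i.e. the 62% figure in the article.
```

The market’s aggregate forecast (63%) landed within about a percentage point of the realized replication rate.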
The traders’ instincts were also unfailingly sound when it came to individual studies. Look at the graph below. The market assigned higher odds of success for the 13 studies that were successfully replicated than the eight that weren’t—compare the blue diamonds to the yellow diamonds.