Category: Science

Is NIH funding seeing diminishing returns?

Scientific output is not a linear function of amounts of federal grant support to individual investigators. As funding per investigator increases beyond a certain point, productivity decreases. This study reports that such diminishing marginal returns also apply for National Institutes of Health (NIH) research project grant funding to institutions. Analyses of data (2006-2015) for a representative cross-section of institutions, whose amounts of funding ranged from $3 million to $440 million per year, revealed robust inverse correlations between funding (per institution, per award, per investigator) and scientific output (publication productivity and citation impact productivity). Interestingly, prestigious institutions had on average 65% higher grant application success rates and 50% larger award sizes, whereas less-prestigious institutions produced 65% more publications and had a 35% higher citation impact per dollar of funding. These findings suggest that implicit biases and social prestige mechanisms (e.g., the Matthew effect) have a powerful impact on where NIH grant dollars go and the net return on taxpayers’ investments. They support evidence-based changes in funding policy geared towards a more equitable, more diverse and more productive distribution of federal support for scientific research. Success rate/productivity metrics developed for this study provide an impartial, empirically based mechanism to do so.

That is by Wayne P. Wahls, via Michelle Dawson.

On Preferring A to B, while also preferring B to A

When people evaluate two or more goods separately versus jointly, it’s common to see “preference reversals.” In a random survey, for example, people were asked to value the following dictionaries:

  • Dictionary A: 20,000 entries, torn cover but otherwise like new
  • Dictionary B: 10,000 entries, like new

When people were asked to value just one dictionary, either A or B, the average valuation was higher for Dictionary B. But when they were asked to evaluate both dictionaries together, the average valuation was higher for Dictionary A.

What’s going on? Most people have no idea how many words a good dictionary has, so telling them that a dictionary has 10K or 20K entries just fades into the background–it’s a dictionary, of course it defines a lot of words. On the other hand, we all know that “like new” is better than “torn cover,” so Dictionary B gets the higher price. When confronted with the pair of dictionaries, however, we see that Dictionary A has twice as many entries as Dictionary B, and it’s obvious that more entries make for a better dictionary; compared with entries, the sine qua non of a dictionary, the torn cover fades in importance.

Cass Sunstein collects a bunch of these examples (these two from List and Lowenstein respectively):

  • Baseball Card Package A: 10 valuable baseball cards, 3 not-so-valuable baseball cards
  • Baseball Card Package B: 10 valuable baseball cards
  • Congressional Candidate A: Would create 5000 jobs; has been convicted of a misdemeanor
  • Congressional Candidate B: Would create 1000 jobs; has no criminal convictions

In each case B tends to have a higher value when evaluated separately but A tends to evaluate higher with joint evaluation. When is separate evaluation better? When is joint evaluation better?

There is a tendency to think that joint evaluation is always better since it is the “full information” condition. Sunstein pushes against this interpretation because he argues that full information doesn’t mean full rationality. Even with full information we may still be biased. The factor that becomes salient when the goods are evaluated jointly, for example, need not be especially relevant. Is a dictionary with 20k entries actually better than one with 10k entries? Maybe 95% of the time it’s worse because it takes longer to find the word you need and the dictionary is less portable. We might let the seemingly irrefutable numerical betterness of A overwhelm what might actually be more relevant, the torn cover.

Sellers could take advantage of the bias of joint evaluation by emphasizing information that consumers might think is important but actually isn’t–our computer screen has 1.073 billion color combinations while our competitor’s has only 16.7 million–while making less salient the difference between 6 and 8 hours of battery life, which may in practice matter more.

Personally, I’d go for full information and trust myself to figure out what is truly important but maybe that is my bias. See the paper for more examples and thought-experiments.

Simplifiers vs. constructors in science

Simplifiers give one a better overall picture of how the world works, whereas constructors are trying to build something.  The balance seems to be shifting, for instance in physics:

Within the Physics label…we find the simplifiers dominated three quarters of the Nobel Prizes from 1952 to 1981, but more recently constructors have edged the balance with more than half of those from 1982 to 2011.

There is also a shift toward constructors in chemistry, though it is less abrupt.  In the fields of physiology and medicine, however, simplifiers reign supreme and there has been no shift across time.  Three-quarters of the prizes are still going to simplifiers.

Does that mean we should be relatively bullish about progress in those areas, based on forthcoming fundamental breakthroughs?

All these points are from Jeremy J. Baumberg’s new and interesting The Secret Life of Science: How It Really Works and Why It Matters.

Genes->Education->Social Mobility

Tens of thousands of studies correlate family socioeconomic status with later child outcomes like income, wealth, and attainment, and then claim the correlation is causal. Very few such studies control for genetics, although twin adoption studies suggest that genetics is important. Cheap genomic scanning, however, has made it possible to go beyond twin studies. A new paper, for example, looks at differences in education-associated genes between non-identical twins raised in the same family and finds that children with more education-associated genes tend to have greater educational attainment and higher income later in life. In other words, differences in child outcomes, both across families and within the same family, are in part driven by genetics.

Surprisingly, however, the authors also find evidence for “genetic nurture,” the idea that parental genes drive the child’s environment, which in turn drives outcomes. That’s surprising because it’s hard to find strong evidence for big environmental effects in adoption studies, but here the authors can rely on more precise data. Specifically, the authors look at maternal education-associated genes that are NOT passed on to the children, and yet they find that such genes are also correlated with important child outcomes (fyi, they only have maternal genes). So smart parents benefit children twice. First by passing on smart genes and second–even when they do not pass on smart genes–by passing on a smart environment. Previous studies missed the latter effect perhaps because they focused on rich parents rather than smart parents (the former being easier to measure). The authors suggest that by looking at how smart parents help kids without smart genes we may be able to figure out smart environments and generalize them to everyone. That strikes me as optimistic.

Here is the paper abstract:

A summary genetic measure, called a “polygenic score,” derived from a genome-wide association study (GWAS) of education can modestly predict a person’s educational and economic success. This prediction could signal a biological mechanism: Education-linked genetics could encode characteristics that help people get ahead in life. Alternatively, prediction could reflect social history: People from well-off families might stay well-off for social reasons, and these families might also look alike genetically. A key test to distinguish biological mechanism from social history is if people with higher education polygenic scores tend to climb the social ladder beyond their parents’ position. Upward mobility would indicate education-linked genetics encodes characteristics that foster success. We tested if education-linked polygenic scores predicted social mobility in >20,000 individuals in five longitudinal studies in the United States, Britain, and New Zealand. Participants with higher polygenic scores achieved more education and career success and accumulated more wealth. However, they also tended to come from better-off families. In the key test, participants with higher polygenic scores tended to be upwardly mobile compared with their parents. Moreover, in sibling-difference analysis, the sibling with the higher polygenic score was more upwardly mobile. Thus, education GWAS discoveries are not mere correlates of privilege; they influence social mobility within a life. Additional analyses revealed that a mother’s polygenic score predicted her child’s attainment over and above the child’s own polygenic score, suggesting parents’ genetics can also affect their children’s attainment through environmental pathways. Education GWAS discoveries affect socioeconomic attainment through influence on individuals’ family-of-origin environments and their social mobility.

You can find the appendix with the key results here. I find the lab style difficult to follow. The authors run regressions, for example, but you won’t find a regression equation followed by a table with all the results. Instead the regression is described in the appendix and then some coefficients, but by no means all, are presented later in the appendix.
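To make the design concrete, here is a minimal simulated sketch of the two key regressions: the child’s outcome on the child’s own polygenic score, and then the same regression with the mother’s non-transmitted score added to test for genetic nurture. The variable names, effect sizes, and data below are hypothetical, chosen only to illustrate the logic; this is not the authors’ code, model, or data.

```python
# Toy simulation of the "genetic nurture" design (illustrative only; not the authors'
# code or data). Child attainment depends on the child's own polygenic score AND on
# the mother's full genotype via the home environment she creates.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000

mother_transmitted = rng.normal(size=n)      # maternal alleles passed to the child
mother_nontransmitted = rng.normal(size=n)   # maternal alleles NOT passed on
father_transmitted = rng.normal(size=n)

child_pgs = (mother_transmitted + father_transmitted) / np.sqrt(2)     # child's own score
mother_pgs = (mother_transmitted + mother_nontransmitted) / np.sqrt(2)

# "Genetic nurture": the mother's full genotype shapes the rearing environment.
environment = 0.3 * mother_pgs + rng.normal(size=n)
attainment = 0.4 * child_pgs + 0.5 * environment + rng.normal(size=n)

# Regression 1: child outcome on the child's own polygenic score.
print(sm.OLS(attainment, sm.add_constant(child_pgs)).fit().params)

# Regression 2: add the mother's NON-transmitted score. Any positive coefficient on it
# can only work through the environment, since those alleles were never inherited --
# the signature of genetic nurture.
X = sm.add_constant(np.column_stack([child_pgs, mother_nontransmitted]))
print(sm.OLS(attainment, X).fit().params)
```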

Lower travel costs boost scientific collaboration

Here is a kind of gravity equation for science:

We develop a simple theoretical framework for thinking about how geographic frictions, and in particular travel costs, shape scientists’ collaboration decisions and the types of projects that are developed locally versus over distance. We then take advantage of a quasi-experiment – the introduction of new routes by a low-cost airline – to test the predictions of the theory. Results show that travel costs constitute an important friction to collaboration: after a low-cost airline enters, the number of collaborations increases by 50%, a result that is robust to multiple falsification tests and causal in nature. The reduction in geographic frictions is particularly beneficial for high quality scientists that are otherwise embedded in worse local environments. Consistent with the theory, lower travel costs also endogenously change the types of projects scientists engage in at different levels of distance. After the shock, we observe an increase in higher quality and novel projects, as well as projects that take advantage of complementary knowledge and skills between sub-fields, and that rely on specialized equipment. We test the generalizability of our findings from chemistry to a broader dataset of scientific publications, and to a different field where specialized equipment is less likely to be relevant, mathematics. Last, we discuss implications for the formation of collaborative R&D teams over distance.

That is from a new paper by Christian Catalini, Christian Fons-Rosen, and Patrick Gaulé.
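For readers who want the shape of the exercise, here is a rough sketch of what a gravity-style collaboration equation and the quasi-experimental test might look like. The notation is mine and only illustrative; it is not necessarily the authors’ exact specification.

```latex
% Illustrative gravity-style equation (my notation, not necessarily the paper's exact model):
% collaborations between locations i and j rise with scientific "mass" and fall with travel costs.
\[
  C_{ij} \;=\; \kappa \, \frac{Q_i \, Q_j}{c(d_{ij})}
\]
% Quasi-experimental test: entry of a low-cost airline lowers c(d_ij) for treated city pairs,
% so compare treated and untreated pairs before and after entry (difference-in-differences):
\[
  \log C_{ijt} \;=\; \beta \, \mathrm{LowCost}_{ijt} \;+\; \gamma_{ij} \;+\; \delta_t \;+\; \varepsilon_{ijt}
\]
% with pair fixed effects gamma_ij and year fixed effects delta_t. The abstract's headline result,
% a roughly 50% increase in collaborations, would correspond to beta of about 0.4 (e^0.4 ~ 1.5).
```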

Do better scientists smile more?

Theory and research indicates that individuals with more frequent positive emotions are better at attaining goals at work and in everyday life. In the current study we examined whether the expression of genuine positive emotions by scientists was positively correlated with work-related accomplishments, defined by bibliometric (e.g. number of citations) and sociometric (number of followers for scholarly updates) indices. Using a sample of 440 scientists from a social networking site for researchers, multiple raters coded smile intensity (full smile, partial smile, or no smile) in publicly available photographs. We found that scientists who presented a full smile had the same quantity of publications yet of higher quality (e.g. citations per paper) and attracted more followers to their updates compared to less positive emotionally expressive peers; results remained after controlling for age and sex. Thin-slicing approaches to the beneficial effects of positive emotionality offer an ecologically valid approach to complement experimental and longitudinal evidence. Evidence linking positive emotional expressions to scientific impact and social influence provides further support for broaden and build models of positive emotions.

I wonder for which fields this might not be true…?

The paper has many authors, including my colleague Todd B. Kashdan.  Via the excellent Kevin Lewis.

If technology has arrived everywhere, why has income diverged?

That is the topic of a new paper by Diego Comin and Martí Mestieri, published in AEJ: Macroeconomics, here is the abstract:

We study the cross-country evolution of technology diffusion over the last two centuries. We document that adoption lags between poor and rich countries have converged, while the intensity of use of adopted technologies of poor countries relative to rich countries has diverged. The evolution of aggregate productivity implied by these trends in technology diffusion resembles the actual evolution of the world income distribution in the last two centuries. Cross-country differences in adoption lags account for a significant part of the cross-country income divergence in the nineteenth century. The divergence in intensity of use accounts for the divergence during the twentieth century.

I am struck by the strength of the two major stylized facts in this paper.  The mean adoption lag for spindles, classified as a 1779 technology, was 130 years, or in other words that is how long it took for the technology to move to poorer countries.  For ships, listed as a 1788 technology, the mean lag is 110 years.  Synthetic fiber is a 1931 technology, with a mean adoption lag of 29 years.  For the internet, a 1983 technology (is that right?), the mean adoption lag is only 6 years.

But the overall story is not so simple.  The more advanced countries use more of these technologies, and use them more effectively (“intensity”), and that gap has been growing over time.  Yes, Ghana has the internet, but it is Silicon Valley that is working wonders with it.  Some technology use begs more technology use.

If you calibrate those parameters properly, it turns out you can explain about 3/4 of the evolution of income divergence across rich and poor countries.

*Three Identical Strangers*

Few movies serve up more social science.  Imagine identical triplets, separated at a young age, and then reared separately in a poor family, in a middle-class family, and in a well-off family.  I can’t say much more without spoiling it all, but I’ll offer these points: listen closely, don’t take the apparent conclusion at face value, ponder the Pareto principle throughout, read up on “the control premium,” solve for how niche strategies change with the comparative statics (don’t forget Girard), and are they still guinea pigs?  Excellent NYC cameos from the 1980s, and see Project Nim once you are done.

Definitely recommended, and I say don’t read any other reviews before going (they are mostly strongly positive).

Spiders Can Fly!

Spiders can fly. Here’s the story from an excellent piece by Ed Yong in The Atlantic.

Spiders have no wings, but they can take to the air nonetheless. They’ll climb to an exposed point, raise their abdomens to the sky, extrude strands of silk, and float away. This behavior is called ballooning. It might carry spiders away from predators and competitors, or toward new lands with abundant resources. But whatever the reason for it, it’s clearly an effective means of travel. Spiders have been found two-and-a-half miles up in the air, and 1,000 miles out to sea.

That part has long been known (although it was news to me). What is new is evidence about how spiders fly: electrostatic forces!

Erica Morley and Daniel Robert have an explanation. The duo, who work at the University of Bristol, has shown that spiders can sense the Earth’s electric field, and use it to launch themselves into the air.

Every day, around 40,000 thunderstorms crackle around the world, collectively turning Earth’s atmosphere into a giant electrical circuit. The upper reaches of the atmosphere have a positive charge, and the planet’s surface has a negative one. Even on sunny days with cloudless skies, the air carries a voltage of around 100 volts for every meter above the ground. In foggy or stormy conditions, that gradient might increase to tens of thousands of volts per meter.

Ballooning spiders operate within this planetary electric field. When their silk leaves their bodies, it typically picks up a negative charge. This repels the similar negative charges on the surfaces on which the spiders sit, creating enough force to lift them into the air. And spiders can increase those forces by climbing onto twigs, leaves, or blades of grass. Plants, being earthed, have the same negative charge as the ground that they grow upon, but they protrude into the positively charged air. This creates substantial electric fields between the air around them and the tips of their leaves and branches—and the spiders ballooning from those tips.

…Morley and Robert have tested it with actual spiders.

First, they showed that spiders can detect electric fields. They put the arachnids on vertical strips of cardboard in the center of a plastic box, and then generated electric fields between the floor and ceiling of similar strengths to what the spiders would experience outdoors. These fields ruffled tiny sensory hairs on the spiders’ feet, known as trichobothria. “It’s like when you rub a balloon and hold it up to your hairs,” Morley says.

In response, the spiders performed a set of movements called tiptoeing—they stood on the ends of their legs and stuck their abdomens in the air. “That behavior is only ever seen before ballooning,” says Morley. Many of the spiders actually managed to take off, despite being in closed boxes with no airflow within them. And when Morley turned off the electric fields inside the boxes, the ballooning spiders dropped.

Amazing. Hat tip: The Browser. Here’s a cool video from a different research team showing a spider taking to the sky.
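As a back-of-the-envelope check on the physics, the electrostatic lift on charged silk is just F = qE, to be compared with the spider’s weight mg. The numbers below are my own illustrative assumptions (the charge and local field values are not measurements from the study):

```python
# Back-of-the-envelope comparison of electrostatic lift vs. gravity for a ballooning spider.
# All numbers are illustrative assumptions, not measurements from Morley and Robert's study.
g = 9.81                      # m/s^2

spider_mass = 1e-6            # kg: assume a ~1 mg spiderling
weight = spider_mass * g      # roughly 1e-5 N of gravity to overcome

fair_weather_field = 100.0    # V/m: the ~100 V/m gradient of a clear day (from the article)
tip_field = 10_000.0          # V/m: assumed enhanced field near the tip of a leaf or twig
silk_charge = 30e-9           # C: assume ~30 nC of negative charge spread over the silk strands

force_open = silk_charge * fair_weather_field   # F = qE in the open
force_tip = silk_charge * tip_field             # F = qE at an exposed tip

print(f"weight                 : {weight:.1e} N")
print(f"lift in the open field : {force_open:.1e} N")
print(f"lift at a plant tip    : {force_tip:.1e} N")
# With these assumed numbers, the open-field force (~3e-6 N) is smaller than the weight,
# while the enhanced field at an exposed tip (~3e-4 N) is far more than enough to lift the
# spider, which is consistent with spiders climbing to exposed points before ballooning.
```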

What should I ask Michael Pollan?

I will be doing a Conversation with Tyler with him, no associated public event.  Here is his home page, and the About section.  Here is Wikipedia on Pollan.  Here is a Sean Illing Vox interview with Pollan on his recent work on LSD and other psychedelics.  His most recent book is How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence.  Pollan is perhaps best known for his books on food, cooking, and food supply chains.

So what should I ask him?

Why Sexism and Racism Never Diminish–Even When Everyone Becomes Less Sexist and Racist

The idea that concepts depend on their reference class isn’t new. A short basketball player is tall and a poor American is rich. One might have thought, however, that a blue dot is a blue dot. Blue can be defined by wavelength so unlike a relative concept like short or rich there is some objective reality behind blue even if the boundaries are vague. Nevertheless, in a thought-provoking new paper in Science the all-star team of Levari, Gilbert, Wilson, Sievers, Amodio and Wheatley show that what we identify as blue expands as the prevalence of blue decreases.

In the figure below, for example, the authors ask respondents to identify a dot as blue or purple. The figure on the left shows that as the objective shading increases from very purple to very blue, more people identify the dot as blue, just as one would expect. (The initial and final 200 trials indicate that there is no tendency for changes over time.) In the figure at right, however, blue dots were made less prevalent in the final 200 trials and, after the decrease in prevalence, the tendency to identify a dot as blue increases dramatically. In the decreasing-prevalence condition on the right, a dot that was previously identified as blue only 25% of the time now becomes identified as blue 50% of the time! (Read upwards from the horizontal axis and compare the yellow and blue prediction lines.)

Clever. But so what? What the authors then go on to show is that the same phenomenon happens with complex concepts for which we would arguably like to have a consistent and constant identification.

Are people susceptible to prevalence-induced concept change? To answer this question, we showed participants in seven studies a series of stimuli and asked them to determine whether each stimulus was or was not an instance of a concept. The concepts ranged from simple (“Is this dot blue?”) to complex (“Is this research proposal ethical?”). After participants did this for a while, we changed the prevalence of the concept’s instances and then measured whether the concept had expanded—that is, whether it had come to include instances that it had previously excluded.

…When blue dots became rare, purple dots began to look blue; when threatening faces became rare, neutral faces began to appear threatening; and when unethical research proposals became rare, ambiguous research proposals began to seem unethical. This happened even when the change in the prevalence of instances was abrupt, even when participants were explicitly told that the prevalence of instances would change, and even when participants were instructed and paid to ignore these changes.
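One way to see how this can emerge from a very simple mechanism is a toy simulation (my own sketch, not the authors’ model): an observer who calls a dot “blue” whenever its hue is bluer than the running average of recently seen dots will start calling borderline dots blue once genuinely blue dots become rare.

```python
# Toy simulation of prevalence-induced concept change (my sketch, not the authors' model).
# The observer uses a relative criterion: a dot counts as "blue" if its hue exceeds the
# mean hue of the last WINDOW dots seen. When blue dots become rare, that running mean
# drops, so the same borderline (purple-ish) dot starts getting labeled blue.
import random
from collections import deque

random.seed(1)
WINDOW = 50
history = deque(maxlen=WINDOW)

def run_block(p_blue, n_trials):
    """Return how often a fixed borderline dot (hue = 0.5) would be called 'blue'."""
    borderline_called_blue = 0
    for _ in range(n_trials):
        # Stimulus hues: blue dots are high (0.6-1.0), purple dots are low (0.0-0.4).
        hue = random.uniform(0.6, 1.0) if random.random() < p_blue else random.uniform(0.0, 0.4)
        history.append(hue)
        criterion = sum(history) / len(history)   # adaptive, context-dependent criterion
        if 0.5 > criterion:                       # would a borderline dot look "blue" right now?
            borderline_called_blue += 1
    return borderline_called_blue / n_trials

print("borderline dots called blue at 50% blue prevalence:", run_block(0.5, 500))
print("borderline dots called blue at  5% blue prevalence:", run_block(0.05, 500))
```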

Assuming the result replicates (the authors have seven studies, which appear to me to be independent, although each is fairly small, with 20-100 participants drawn from Harvard undergrads), it has many implications.

In 1960, Webster’s dictionary defined “aggression” as “an unprovoked attack or invasion,” but today that concept can include behaviors such as making insufficient eye contact or asking people where they are from. Many other concepts, such as abuse, bullying, mental disorder, trauma, addiction, and prejudice, have expanded of late as well.

… Many organizations and institutions are dedicated to identifying and reducing the prevalence of social problems, from unethical research to unwarranted aggressions. But our studies suggest that even well-meaning agents may sometimes fail to recognize the success of their own efforts, simply because they view each new instance in the decreasingly problematic context that they themselves have brought about. Although modern societies have made extraordinary progress in solving a wide range of social problems, from poverty and illiteracy to violence and infant mortality, the majority of people believe that the world is getting worse. The fact that concepts grow larger when their instances grow smaller may be one source of that pessimism.

The paper also gives us a way of thinking more clearly about shifts in the Overton window. When strong sexism declines, for example, the Overton window shrinks on one end and expands on the other so that what was once not considered sexism at all (e.g. “men and women have different preferences which might explain job choice”) now becomes violently sexist.

Nicholas Christakis and the fearless Gabriel Rossman point out on Twitter that it works the other way as well. Namely, the presence of extremes can help others near the middle by widening the set of issues that can be discussed or studied without fear of opprobrium.

But why shouldn’t our standards change over time? Most of the people in the 1850s who thought slavery was an abomination would have rejected the idea of interracial marriage. Even in the very recent past, wife beating wasn’t considered a violent crime. What racism and sexism mean has changed over time. Are these examples of concept creep or progress? I’d argue progress, but the blue dot experiment of Levari et al. suggests that if even objective concepts morph under prevalence inducement, then subjective concepts surely will. The issue then is not to prevent progress but to recognize it and not be fooled into thinking that progress hasn’t been made just because our identifications have changed.

Computational complexity and time travel

I’ve already put this Scott Aaronson paper in Assorted Links, but here are two passages I liked in particular:

…finding a fixed point might require Nature to solve an astronomically-hard computational problem! To illustrate, consider a science-fiction scenario wherein you go back in time and dictate Shakespeare’s plays to him. Shakespeare thanks you for saving him the effort, publishes verbatim the plays that you dictated, and centuries later the plays come down to you, whereupon you go back in time and dictate them to Shakespeare, etc. Notice that, in contrast to the grandfather paradox, here there is no logical contradiction: the story as we told it is entirely consistent. But most people find the story “paradoxical” anyway. After all, somehow Hamlet gets written, without anyone ever doing the work of writing it! As Deutsch perceptively observed, if there is a “paradox” here, then it is not one of logic but of computational complexity…

And:

Now, some people have asked how such a claim could possibly be consistent with modern physics. For didn’t Einstein teach us that space and time are merely two aspects of the same structure? One immediate answer is that, even within relativity theory, space and time are not interchangeable: space has a positive signature whereas time has a negative signature. In complexity theory, the difference between space and time manifests itself in the straightforward fact that you can reuse the same memory cells over and over, but you can’t reuse the same moments of time.

Yet, as trivial as that observation sounds, it leads to an interesting thought. Suppose that the laws of physics let us travel backwards in time. In such a case, it’s natural to imagine that time would become a “reusable resource” just like space is—and that, as a result, arbitrary PSPACE computations would fall within our grasp. But is that just an idle speculation, or can we rigorously justify it?
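To make the fixed-point remark concrete, here is a toy illustration of my own (not Aaronson’s): if the “dynamics” of a causal loop have no structure you can exploit, certifying that a self-consistent n-bit history exists can require brute force over all 2^n candidates.

```python
# Toy illustration: finding a self-consistent state (a fixed point of f) by brute force.
# With no structure to exploit, the search may have to examine all 2**n_bits candidates,
# which is the "astronomically hard computational problem" in miniature. (My example,
# not Aaronson's.)
n_bits = 20
secret = 0b1011_0110_0101_1100_1010   # the one self-consistent history, unknown to the searcher

def dynamics(x):
    """An adversarial 'time-travel map' with exactly one fixed point (x == secret)."""
    return x if x == secret else (x + 1) % (2 ** n_bits)

def find_fixed_point(f, n_bits):
    """Return some x with f(x) == x, scanning all 2**n_bits states in the worst case."""
    for x in range(2 ** n_bits):
        if f(x) == x:
            return x
    return None

print(find_fixed_point(dynamics, n_bits))   # succeeds, but only after scanning ~750,000 states
```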

It is in general quite an interesting paper.

The origins of WEIRD psychology

This is one of the most important topics, right?  Well, here is a new and quite thorough paper by Jonathan Schulz, Duman Bahrami-Rad, Jonathan Beauchamp, and Joseph Henrich.  Here is the abstract:

Recent research not only confirms the existence of substantial psychological variation around the globe but also highlights the peculiarity of populations that are Western, Educated, Industrialized, Rich and Democratic (WEIRD). We propose that much of this variation arose as people psychologically adapted to differing kin-based institutions—the set of social norms governing descent, marriage, residence and related domains. We further propose that part of the variation in these institutions arose historically from the Catholic Church’s marriage and family policies, which contributed to the dissolution of Europe’s traditional kin-based institutions, leading eventually to the predominance of nuclear families and impersonal institutions. By combining data on 20 psychological outcomes with historical measures of both kinship and Church exposure, we find support for these ideas in a comprehensive array of analyses across countries, among European regions and between individuals with different cultural backgrounds.

As you might expect, a paper like this is fairly qualitative by its nature, and this one will not convince everybody.  Who can separate out all those causal pathways?  Even in a paper that is basically a short book.

Object all you want, but there is some chance that this is one of the half dozen most important social science and/or history papers ever written.  So maybe a few of you should read it.

And the print in the references to the supplementary materials is small, so maybe I missed it, but I don’t think there is any citation to Steve Sailer, who has been pushing a version of this idea for many years.

What are the best analyses of small, innovative, productive groups?

Shane emails me:

Hello!

What have you found to be the best books on small, innovative, productive groups?

These could be in-depth looks at specific groups – such as The Idea Factory, about Bell Labs – or they could be larger studies of institutions, guilds, etc.

I suggest reading about musical groups and sports teams and revolutions in the visual arts, as I have mentioned before, taking care that you are familiar with and indeed care passionately about the underlying area in question.  Navy SEALs are another possible option for a topic area.  In sociology there is network theory, but…I don’t know.  In any case, the key is to pick an area you care about, and read in clusters, rather than hoping to find “the very best book.”  The very theory of small groups predicts this is how you should read about small groups!

But if you must start somewhere, Randall Collins’s The Sociology of Philosophies is probably the most intensive and detailed place to start; it is too much for some, in fact, and arguably the book strains too hard at its target.

I have a few observations on what I call “small group theory”:

1. If you are seeking to understand a person you meet, or might be hiring, ask what was the dominant small group that shaped the thinking and ideas of that person, typically (but not always) at a young age.  Step #1 is often “what kind of regional thinker is he/she?” and step #2 is this.

2. If you are seeking to foment change, take care to bring together people who have a relatively good chance of forming a small group together.  Perhaps small groups of this kind are the fundamental units of social change, noting that often the small groups will be found within larger organizations.  The returns to “person A meeting person B” arguably are underrated, and perhaps more philanthropy should be aimed toward this end.

3. Small groups (potentially) have the speed and power to learn from members, iterate quickly, improve their ideas, and base all of those processes upon trust.  These groups also have low overhead and low communications overhead.  Small groups also insulate their members sufficiently from a possibly stifling mainstream consensus, while the multiplicity of group members simultaneously boosts the chances of drawing in potential ideas and corrections from the broader social milieu.

4. The bizarre and the offensive have a chance to flourish in small groups.  In a sense, the logic behind an “in joke” resembles the logic behind social change through small groups.  The “in joke” creates something new, and the small group can create something additionally new, in a broader and socially more significant context, based on the same logic that stands behind the in joke.

5. How large is a small group anyway?  (How many people can “get” an inside joke?)  Has the internet made “small groups” larger?  Or possibly smaller?  (If there are more common memes shared by a few thousand people, perhaps the small group needs to be organized around something truly exclusive and thus somewhat narrower than in times past?)

6. Can a spousal or spouse-like couple be such a small group?  A family (Bach, Euler)?

7. What are the negative social externalities of such small groups, compared to alternative ways of generating and evaluating ideas?  And how often in life should you attempt to switch your small groups?

8. What else should we be asking about small groups and the small groups theory of social change?

9. What does your small group have to say about this?

I thank an anonymous correspondent — who adheres to the small group theory — for contributions to this post.

What should I ask Claire Lehmann?

I will be doing a Conversation with her (no associated public event).  If you don’t already know her, here is Wikipedia on Claire:

Claire Lehmann is an Australian psychologist, writer, and the founding editor of Quillette.

Lehmann founded Quillette in October 2015, with the goal of publishing intellectually rigorous material that makes arguments or presents data not in keeping with the contemporary intellectual consensus.

Here is Claire on Twitter.  Here is her own home page and bio.  Here is the Quillette Patreon page.

So what should I ask her?