Here is an email from Kevin Patrick Mahaffey, and I would like to hear your views on whether this makes sense:
One question I don’t hear being asked: Can we use pooling to repeatedly test the entire labor force at low cost with limited SARS-CoV-2 testing supplies?
Pooling is a technique used elsewhere in pathogen detection where multiple samples (e.g. nasal swabs) are combined (perhaps after the RNA extraction step of RT-qPCR) and run as one assay. A negative result confirms no infection of the entire pool, but a positive result indicates “one or more of the pool is infected.” If this is the case, then each individual in the pool can receive their own test (or, if we’re getting fancy [read: probably too hard to implement in the real world], perform an efficient search of the space using sub-pools).
To me, at least, the key questions seem to be:
– Are current assays sensitive enough to work? Technion researchers report yes in a pool as large as 60.
– Can we align limiting factors in testing cost/velocity with pooled steps? For example, if nasal swabs are the limiting reagent, then pooling doesn’t help; however if PCR primers and probes are limiting it’s great.
– Can we get a regulatory allowance for this? Perhaps the hardest step.
Example (readers, please check my back-of-the-envelope math): If we assume base infection rate of the population is 1%, then pooling of 11 samples has a ~10% chance of coming out positive. If you run all positive pools through individual assays, the expected number of tests per person is 0.196 or a 5.1x multiple on testing throughput (and a 5.1x reduction in cost). This is a big deal.
If we look at this from the view of whole-population biosurveillance after the outbreak period is over and we have a 0.1% base infection rate, pools of 32 samples have an expected number of tests per person at 0.0628 or a 15.9x multiple on throughput/cost reduction.
Putting prices on this, an initial whole-US screen at 1% rate would require about 64M tests. Afterward, performing periodic biosurveillance to find hot spots requires about 21M tests per whole-population screen. At $10/assay (what some folks working on in-field RT-qPCR tests believe marginal cost could be), this is orders of magnitude less expensive than mitigations that deal with a closed economy for any extended period of time.
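The back-of-the-envelope math above can be checked with a short script. A minimal sketch, assuming independent infections at the base rate, a perfectly sensitive pooled assay (the post itself flags sensitivity as an open question), and a US population of roughly 330M:

```python
# Check of the pooled-testing arithmetic: one pooled assay per pool,
# plus an individual retest for every member of a pool that tests positive.

def expected_tests_per_person(p, pool_size):
    """Expected assays per person under two-stage pooling, assuming
    independent infections at rate p and a perfectly sensitive test."""
    p_pool_positive = 1 - (1 - p) ** pool_size
    return 1 / pool_size + p_pool_positive

# Outbreak scenario: 1% base rate, pools of 11.
e1 = expected_tests_per_person(0.01, 11)
print(round(e1, 3), round(1 / e1, 1))   # ~0.196 tests/person, ~5.1x throughput

# Biosurveillance scenario: 0.1% base rate, pools of 32.
e2 = expected_tests_per_person(0.001, 32)
print(round(e2, 4), round(1 / e2, 1))   # ~0.0628 tests/person, ~15.9x

# Whole-US screens, taking the population as ~330M (an assumption here).
print(round(330e6 * e1 / 1e6), round(330e6 * e2 / 1e6))
# roughly 65M and 21M tests; the post's ~64M figure implies a
# slightly smaller population estimate
```

The numbers reproduce the email's figures, so the arithmetic checks out under those simplifying assumptions; real-world dilution effects and imperfect sensitivity would push the optimal pool size down.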
I’m neither a policy nor medical expert, so perhaps I’m missing something big here. Is there really $20 on the ground or [something something] efficient market?
By the way, Iceland is testing many people and trying to build up representative samples.
1. Segregating old people, and letting others go about their regular business. Given how many older people now work (and vote), and how many employees in nursing homes are young, I’ve yet to see a good version of this plan, but if you favor it please do try to write one up. One of you suggested taking everyone over the age of 65 and encasing them in bubble wrap, or something.
3. Testing as many Americans as possible, or at least a representative sample, to get data.
I hope to analyze these more in the future.
And, despite not knowing what threat the SETREP-ID would be enacted for, the group had pre-emptive ethical clearance to immediately gather samples from patients – something which would take weeks or months in other countries.
It is believed that this has saved thousands of lives, here is the full story, via Rohan Claffy.
That is the new forthcoming book by Jay Belsky, Avshalom Caspi, Terrie E. Moffitt, and Richie Poulton, which will prove one of the best and most important works of the last few years. Imagine following one thousand or so Dunedin New Zealanders for decades of their lives, up through age 38, and recording extensive data, and then doing the same for one thousand or so British twins through age 20, and 1500 American children, in fifteen different locales, up through age 15. Just imagine what you would learn!
You merely have to buy this book. In the meantime, let me give you just a few of the results.
The traits of being “undercontrolled” or “inhibited” as a toddler are the traits most likely to persist up through age eighteen. The undercontrolled tend to end up as danger-seeking or impulsive. Those same individuals were most likely to have gambling disorders at age 32. Girls with an undercontrolled temperament, however, ran into much less later danger than did the boys, including for gambling.
“Social and economic wealth accumulated by the fourth decade of life also proved to be related to childhood self-control.” And yes that is with controls, including for childhood social class.
Being formally diagnosed with ADHD in childhood was statistically unrelated to being so diagnosed later in adult life. It did, however, predict elevated levels of “hyperactivity, inattentiveness, and impulsivity” later in adulthood. I suspect that all reflects more poorly on the diagnoses than on the concept. By the way, decades later three-quarters of parents did not even remember their children receiving ADHD diagnoses, or exhibiting symptoms of ADHD (!).
Parenting styles are intergenerationally transmitted for mothers but not for fathers.
In one case the authors were able to control for DNA, and still they found that parenting styles affected the development of the children (p.104).
As for the effects of day care, it seems what matters for the mother-child relationship is the quantity of time spent by the mother taking care of the child, not the quality (p.166). For the intellectual development of the child, however, quality matters, not the quantity. By age four and a half, the children who spent more time in day care were more disobedient and aggressive. At least on average, those problems persist through the teen years. The good news is that the quality of the family environment growing up still matters more than day care.
But yet there is so much more! I have only scratched the surface of this fascinating book. I will not here betray the results on the effects of neighborhoods on children, for instance, among numerous other topics and questions. Or how about bullying? Early and persistent marijuana use? (Uh-oh) And what do we know about polygenic scores and career success? What can we learn about epigenetics by considering differential victimization of twins? What in youth predicts later telomere erosion?
I would describe the writing style as “clear and factual, but not entertaining.”
You can pre-order it here, one of the books of the year and maybe more, recommended of course.
Nicholas Whitaker of Brown, general career development grant in the area of Progress Studies.
Coleman Hughes, travel and career development grant.
Michael T. Foster, career development grant to study machine learning to predict which politicians will succeed and advance their careers.
John Strider, a Progress Studies grant on how to reinvent the integrated corporate research lab.
Dryden Brown, to help build institutions and a financial center in Ghana, through his company Bluebook Cities.
Adaobi Adibe, to restructure credentialing, and build infrastructure for a more meritocratic world, helping workers create property rights in the evaluation of their own talent.
Jassi Pannu, medical student at Stanford, to study best policy responses to pandemics.
Vasco Queirós, for his work on a Twitter browser app for superior threading and on-line communication.
Chris, a loyal MR reader, writes to me:
I’ve been turning to your insights on prizes vs. grants over the years. Your Google talk from 2007 is without question the best discussion I’ve found of their respective merits…I was wondering if your thinking on prizes vs. grants has evolved, and in particular [TC has added the numbers here]:
1. In the Google talk, you talked about an equilibrium in which there would be a growing ecosystem of big prizes complementing one another. I’m not sure it has turned out this way. Do you agree, and what happened? Did the “failure” of some high profile prizes (e.g. the Google Lunar XPrize) dampen down the enthusiasm?
2. More generally, there seemed to be an expectation in the 2000s and early 2010s that prizes would take off and become a more significant feature of the R&D funding landscape. Again, I don’t think that has really happened. What explains that?
3. Looking specifically at government funding of R&D, do you think there is an equilibrium in which grants can coexist with prizes? Or do grants squeeze out prizes through some form of adverse selection (the best researchers opting for grants over prizes)?
4. How important do you think public choice reasons are for us being in a grant-dominated equilibrium? It seems that the science sector has done a great job of positioning itself as something other than an interest group, with its interests squarely aligned with the public good. (Even suggesting that the science sector is also an interest group seems slightly heretical. It’s interesting that Dominic Cummings, for all his radicalism, seems to see little need for any reform of the science/research ecosystem beyond ARPA).
First a general remark: I now see the current scientific (and cultural) establishment as having more implicit prizes than I used to realize. In fact, getting a grant is one of the biggest prizes you can receive, if the grant is sufficiently prestigious. By an “implicit prize,” I mean a prize where the target achievement is not quite spelled out, but if “we” (however defined) judge you to have achieved enough, we will pour grants, status, and high quality social networks into your lap. For instance, Alex and I have received significant “prizes” for writing MR, although none of those prizes have names or bring explicit public recognition, as opposed to general recognition. We have in contrast never received a grant to write MR, so are prizes really so under-provided?
So my current thinking is a bit less “grants vs. prizes,” and somewhat more “implicit prizes vs. explicit prizes, each combined with grants to varying degrees.” Implicit prizes are more flexible, but they also are easier to cheat with, since the standard of achievement is never quite clear. Implicit prizes also are much more valuable to people who can use, build, and exploit their social networks, and of course that is not everyone (but shouldn’t we be giving more prizes to those people?). Implicit prizes also can be revoked through subsequent loss of status. Implicit prizes are more likely “granted” by the hands of social networks rather than judging panels, all of those features being both cost and benefit.
Now to the specific points:
1. As the venture capital ecosystem grows, and as the value of publicity rises (it is easier to monetize scientific and other sources of fame), and there are more “influencers in the broad sense,” there are more implicit prizes to be had. And did the Lunar XPrize fail? If an end is not worth accomplishing, a prize is one way to find that out.
2. In addition to my point about the proliferation of implicit prizes, the scientific, academic, and political communities are far too conservative in the literal sense of that word. How many top schools experiment with different tenure procedures? Different ways of running a department? It is sad how difficult it is to experiment with changes in academia and science, whether the topic be prizes or not.
3. The best researchers get both grants and prizes (one hopes).
By the way, here is a recent piece on the empirics of prizes, mostly positive results.
A torrent of data is being released daily by preprint servers that didn’t even exist a decade ago, then dissected on platforms such as Slack and Twitter, and in the media, before formal peer review begins. Journal staffers are working overtime to get manuscripts reviewed, edited, and published at record speeds. The venerable New England Journal of Medicine (NEJM) posted one COVID-19 paper within 48 hours of submission. Viral genomes posted on a platform named GISAID, more than 200 so far, are analyzed instantaneously by a phalanx of evolutionary biologists who share their phylogenetic trees in preprints and on social media.
“This is a very different experience from any outbreak that I’ve been a part of,” says epidemiologist Marc Lipsitch of the Harvard T.H. Chan School of Public Health. The intense communication has catalyzed an unusual level of collaboration among scientists that, combined with scientific advances, has enabled research to move faster than during any previous outbreak. “An unprecedented amount of knowledge has been generated in 6 weeks,” says Jeremy Farrar, head of the Wellcome Trust…
The COVID-19 outbreak has broken that mold. Early this week, more than 283 papers had already appeared on preprint repositories (see graphic, below), compared with 261 published in journals. Two of the largest biomedical preprint servers, bioRxiv and medRxiv, “are currently getting around 10 papers each day on some aspect of the novel coronavirus,” says John Inglis, head of Cold Spring Harbor Laboratory Press, which runs both servers. The deluge “has been a challenge for our small teams … [they] are working evenings and weekends.”
We demonstrate empirically that measures of novelty are correlated with but distinct from measures of scientific impact, which suggests that if also novelty metrics were utilized in scientist evaluation, scientists might pursue more innovative, riskier, projects.
That is from Jay Bhattacharya and Mikko Packalen in a new NBER working paper on scientific innovation and stagnation.
They point out that Eugene Garfield, the scientist behind the development of citation count, did not think it should be used to evaluate individual scientists. Overall, citations encourage too much work in crowded, “approaching peak” areas, rather than developing new ideas. In lieu of citations, the authors suggest using textual analysis to determine how much a paper is building on new ideas rather than on already intensively explored ideas.
We have the transcript live on our Day One Project site: https://www.dayoneproject.org/cowen-kalil-transcript
Here was the video version, with some sound imperfections. And from Schmidt Futures:
…some context on the broader event is here, along with details on our open call for innovation, science, and tech policy ideas to inform the priorities of the next presidential term – your community undoubtedly would have great contributions. We are accepting submissions of these ideas through the Day One Accelerator until March 1.
I am very much looking forward to my Schmidt Futures event coming up this March.
Here is the transcript and audio, here is part of the summary:
Tim joined Tyler to discuss the role of popular economics in a politicized world, the puzzling polarization behind Brexit, why good feedback is necessary (and rare), the limits of fact-checking, the “tremendously British” encouragement he received from Prince Charles, playing poker with Steve Levitt, messiness in music, the underrated aspect of formal debate, whether introverts are better at public speaking, the three things he can’t live without, and more.
Here is one bit near the opening:
COWEN: These are all easy questions. Let’s think about public speaking, which you’ve done quite a bit of. On average, do you think extroverts or introverts are better public speakers?
HARFORD: I am an introvert. I’ve never seen any research into this, so it should be something that one could test empirically. But as an introvert, I love public speaking because I like being alone, and you’re never more alone than when you’re on the stage. No one is going to bother you when you’re up there. I find it a great way to interact with people because they don’t talk back.
COWEN: What other non-obvious traits do you think predict being good at public speaking?
HARFORD: Hmmm. You need to be willing to rehearse and also willing to improvise and make stuff up as you go along. And I think it’s hard for somebody to be willing to do both. I think the people who like to rehearse end up rehearsing too much and being too stiff and not being willing to adapt to circumstances, whereas the people who are happy to improvise don’t rehearse enough, and so their comments are ill formed and ill considered. You need that capacity to do both.
And another segment:
HARFORD: …Brian Eno actually asked me a slightly different question, which I found interesting, which was, “If you were transported back in time to the year 700, what piece of technology would you take — or knowledge or whatever — what would you take with you from the present day that would lead people to think that you were useful, but would also not cause you to be burned as a witch?”
COWEN: A hat, perhaps.
HARFORD: A hat?
COWEN: If it’s the British Isles.
HARFORD: Well, a hat is useful. I suggested the Langstroth beehive. The Langstroth beehive was invented in about 1850. It’s an enormously important technology in the domestication of bees. It’s a vast improvement on pre-Langstroth beehives, vast improvement on medieval beehives. Yet, it’s fairly straightforward to make and to explain to people how it works and why it works. I think people would appreciate it, and everybody likes honey, and people have valued bees for a long time. So that would have been my answer.
COWEN: I’ve read all of your books. I’ve read close to all of your columns, maybe all of them in fact, and I’m going to ask you a question I also asked Reid Hoffman. You know the truths of economics, plenty of empirical papers. Why aren’t you weirder? I’ve read things by you that I disagreed with, but I’ve never once read anything by you that I thought was outrageous. Why aren’t you weirder?
The conversation has many fine segments, definitely recommended, Tim was in top form. I very much enjoyed our “Brexit debate” as well, too long to reproduce here, but I made what I thought was the best case for Brexit possible and Tim responded.
A new study compares Hebrew-speaking with some Arabic-speaking communities, here is the abstract:
In the past three decades in high-income countries, female students have outperformed male students in most indicators of educational attainment. However, the underrepresentation of girls and women in science courses and careers, especially in physics, computer sciences, and engineering, remains persistent. What is often neglected by the vast existing literature is the role that schools, as social institutions, play in maintaining or eliminating such gender gaps. This explorative case study research compares two high schools in Israel: one Hebrew-speaking state school that serves mostly middle-class students and exhibits a typical gender gap in physics and computer science; the other, an Arabic-speaking state school located in a Bedouin town that serves mostly students from a lower socioeconomic background. In the Arabic-speaking school over 50% of the students in the advanced physics and computer science classes are females. The study aims to explain this seemingly counterintuitive gender pattern with respect to participation in physics and computer science. A comparison of school policies regarding sorting and choice reveals that the two schools employ very different policies that might explain the different patterns of participation. The Hebrew-speaking school prioritizes self-fulfillment and “free-choice,” while in the Arabic-speaking school, staff are much more active in sorting and assigning students to different curricular programs. The qualitative analysis suggests that in the case of the Arabic-speaking school the intersection between traditional and collectivist society and neoliberal pressures in the form of raising achievement benchmarks contributes to the reversal of the gender gap in physics and computer science courses.
The article is “Explaining a reverse gender gap in advanced physics and computer science course‐taking: An exploratory case study comparing Hebrew‐speaking and Arabic‐speaking high schools in Israel” by Halleli Pinson, Yariv Feniger, and Yael Barak.
Via the excellent Kevin Lewis.
That is the new David Attenborough BBC nature show, available on streaming or buy the discs from the UK. Believe it or not it has better footage than the earlier BBC nature shows, while remaining inside the basic template of what such shows attempt to accomplish. Here is a very good Guardian review. Here is a somewhat snotty NYT review, bemoaning Attenborough’s tone of “polite optimism.” Strongly recommended.
By Ronald S. Calinger, what a beautiful book, clearly written, conceptual in nature, placing Euler in the broader history of mathematics, the funding of science, and the Enlightenment, all in a mere 536 pp. of text. Here is one bit:
At midcentury Leonhard Euler was at the peak of his career. Johann I (Jean I) Bernoulli had saluted him as “the incomparable L. Euler, the prince among mathematicians” in 1745, and Henri Poincaré’s later description of him as the “god of mathematics” attests to his supremacy in the mathematical sciences. Euler continued to center his research on making seminal contributions to differential and integral calculus and rational mechanics, and producing substantial advances in astronomy, hydrodynamics, and geometrical optics; the state projects of Frederick II required attention especially to hydraulics, cartography, lotteries, and turbines. At midcentury, when d’Alembert and Alexis Claude Clairaut in Paris, Euler in Berlin, Colin Maclaurin in Scotland, and Daniel Bernoulli in Basel dominated the physical sciences, Euler was their presiding genius.
Nor had I known that Rameau sent his treatise on the fundamental mathematics of music to Euler for comments.
Definitely recommended, you can order it here.
Mathis Lohaus writes to me:
Thanks for doing the Conversations. I greatly enjoyed Acemoglu, Duflo, and Banerjee in short succession after the Christmas break. Your question about “top-5 journals” and the bits about graduate training reminded me of something I’ve had on my mind for a while now:
For the average PhD student, how hard is it to become a tenured economist — compared to 10, 20, 30, 40 … years ago? (And how about someone in the top 10% of talent/grit?)
Publication requirements have clearly become tougher in absolute terms. But how difficult is it to write a few “very good” papers in the first place? On Twitter, people will sometimes say things like “oh, it must have been nice to get tenure back in 1997 based on 1 top article, which in turn was based on a simple regression with n = 60”. I wonder if that criticism is fair, because I imagine the learning curve for quantitative methods must have been challenging. And what about the formal models etc.? Surely those were always hard. (I vaguely remember a photo showing difficult comp exam questions…)
More broadly, early career scholars now have tons of data and inspiring research at their fingertips all the time. Also, nepotism and discrimination might be less powerful than in earlier decades…? On the other hand, you have to take into account that many more PhDs are awarded than ever before. I suspect that alone is a huge factor, but perhaps less acute if we focus only on people who “really, really want to stay in academia”.
A different way to ask the question: When would have been the best point in time to try to become an econ professor (in the USA)?
I would love to hear about your thoughts, and/or input from MR readers.
I always enjoy questions that somewhat answer themselves. I would add these points:
1. The skills of networking and finding new data sets are increasingly important, all-important you might say, at least for those in the top tier of ability/effort.
2. Fundraising matters more too, because the project might cost a lot, RCTs being the extreme case here.
3. Managing your research team matters much more, and the average size of a research team for influential work is much larger. Once upon a time, three authors on a paper was considered slightly weird (the claim was one of them virtually always did nothing), now four is quite normal and the background research support is much higher as well.
Recently I was speaking to someone on the job market, wondering if he should be an academic. I said: “In the old days you spent a higher percentage of your time doing economics. Nowadays, you spend a higher percentage of your time managing a research team doing economics. You hardly do economics at all. So if you are mainly going to be a manager, why not manage for the higher rather than the lower salary?”
That was tongue in cheek of course.
On the bright side, learning today through the internet is so much easier. For instance, I find YouTube a good way to learn/refresh on new ideas in econometrics, easier than just trying to crack the final published paper.