
Fast Grants, a project of Emergent Ventures against Covid-19, status update

As you may recall, the goal of Fast Grants is to support biomedical research to fight back against Covid-19, and thus to help restore prosperity and liberty.

Yesterday 40 awards were made, totaling about $7 million, and money is already going out the door with ongoing transfers today.  Winners are from MIT, Harvard, Stanford, Rockefeller University, UCSF, UC Berkeley, Yale, Oxford, and other locales of note.  The applications are of remarkably high quality.

Nearly 4000 applications have been turned down, and many other applicants are being put in touch with other institutions for possible funding support, with that ancillary total set to top $5 million.

The project was announced April 8, 2020, only eight days ago.  And Fast Grants was conceived of only about a week before that, and with zero dedicated funding at the time.

I wish to thank everyone who has worked so hard to make this a reality, including the very generous donors to the program, those at Stripe who contributed by writing new software, the quality-conscious and conscientious referees and academic panel members (about twenty of them), and my co-workers at Mercatus at George Mason University, which is home to Emergent Ventures.

I hope soon to give you an update on some of the supported projects.

Emergent Ventures Covid-19 prizes, second cohort

There is another round of prize winners, and I am pleased and honored to announce them:

1. Petr Ludwig.

Petr has been instrumental in building out the #Masks4All movement, and in persuading individuals in the Czech Republic, and in turn the world, to wear masks.  That already has saved numerous lives and made possible — whenever the time is right — an eventual reopening of economies.  And I am pleased to see this movement is now having an impact in the United States.

Here is Petr on Twitter, and here is the viral video he had a hand in creating and promoting.  His work has been truly impressive, and I also would like to offer praise and recognition to all of the people who have worked with him.

2. www.covid19india.org/

The covid19india project is a website for tracking the progress of Covid-19 cases through India, and it is the result of a collaboration.

It is based on a large volunteer group that rapidly aggregates and verifies patient-level data by crowdsourcing, and it open-sources all of the (non-personally identifiable) data for researchers and analysts to consume. The data behind the React-based website and its cluster graph come from a crowdsourced Google Sheet filled in by a large and hardworking Ops team at covid19india, who manually enter each case, from various news sources, as soon as it is reported. The top contributor among roughly 100 other code contributors, and the maintainer of the website, is Jeremy Philemon, an undergraduate at SUNY Binghamton majoring in Computer Science. Another interesting contribution is from Somesh Kar, a 15-year-old high school student at Delhi Public School RK Puram, New Delhi, who worked on the code for the cluster graph; he is interested in computer science and tech entrepreneurship and is a designer and developer in his free time. Somesh was joined in this effort by his brother, Sibesh Kar, a tech entrepreneur in New Delhi and the founder of MayaHQ.

3. Debes Christiansen, the head of department at the National Reference Laboratory for Fish and Animal Diseases in the capital, Tórshavn, Faroe Islands.

Here is the story of Debes Christiansen.  Here is one part:

A scientist who adapted his veterinary lab to test for disease among humans rather than salmon is being celebrated for helping the Faroe Islands, where a larger proportion of the population has been tested than anywhere else in the world, avoid coronavirus deaths.

Debes was prescient in understanding the import of testing, and also in realizing in January that he needed to move quickly.

Please note that I am trying to reach Debes Christiansen — can anyone please help me in this endeavor with an email?

Here is the list of the first cohort of winners, here is the original prize announcement.  Most of the prize money still remains open to be won.  It is worth noting that the winners so far are taking the money and plowing it back into their ongoing and still very valuable work.

An econometrician on the SEIRD epidemiological model for Covid-19

There is a new paper by Ivan Korolev:

This paper studies the SEIRD epidemic model for COVID-19. First, I show that the model is poorly identified from the observed number of deaths and confirmed cases. There are many sets of parameters that are observationally equivalent in the short run but lead to markedly different long run forecasts. Second, I demonstrate using the data from Iceland that auxiliary information from random tests can be used to calibrate the initial parameters of the model and reduce the range of possible forecasts about the future number of deaths. Finally, I show that the basic reproduction number R0 can be identified from the data, conditional on the clinical parameters. I then estimate it for the US and several other countries, allowing for possible underreporting of the number of cases. The resulting estimates of R0 are heterogeneous across countries: they are 2-3 times higher for Western countries than for Asian countries. I demonstrate that if one fails to take underreporting into account and estimates R0 from the cases data, the resulting estimate of R0 will be biased downward and the model will fail to fit the observed data.

Here is the full paper.  And here is Ivan’s brief supplemental note on CFR.  (By the way, here is a new and related Andrew Atkeson paper on estimating the fatality rate.)
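
To see the flavor of the identification problem, here is a minimal sketch of my own (not code from the paper, and all parameter values are illustrative assumptions): two SEIRD parameterizations are chosen to share the same early exponential growth rate in deaths, and the second scenario’s initial seed is rescaled so that cumulative deaths at day 30 coincide, yet the long-run death tolls differ substantially (by roughly a factor of two in this example).

```python
# Minimal SEIRD sketch of Korolev's identification point: parameter sets that
# match early death counts can imply very different long-run outcomes.
# All parameter values below are illustrative assumptions, not estimates.

import numpy as np

def simulate_deaths(beta, sigma, gamma, ifr, N=1e7, E0=1000.0, days=300, dt=0.1):
    """Euler-integrate a standard SEIRD model; return cumulative deaths by day."""
    S, E, I, R, D = N - E0, E0, 0.0, 0.0, 0.0
    deaths = []
    for _ in range(days):
        for _ in range(round(1 / dt)):
            new_inf = beta * S * I / N
            dS, dE = -new_inf, new_inf - sigma * E
            dI = sigma * E - gamma * I
            dR, dD = (1 - ifr) * gamma * I, ifr * gamma * I
            S, E, I, R, D = S + dS*dt, E + dE*dt, I + dI*dt, R + dR*dt, D + dD*dt
        deaths.append(D)
    return np.array(deaths)

# Fix the early exponential growth rate r; for SEIR-type models
# beta = (r + sigma)(r + gamma) / sigma, so different infectious periods
# (and different IFRs) deliver the same r with different R0 = beta / gamma.
r, sigma = 0.1, 1 / 5.0
scenarios = {
    "A: 5-day infectious period, IFR 1.0%": dict(gamma=1 / 5.0, ifr=0.010),
    "B: 10-day infectious period, IFR 0.5%": dict(gamma=1 / 10.0, ifr=0.005),
}

results = {}
for name, p in scenarios.items():
    beta = (r + sigma) * (r + p["gamma"]) / sigma
    results[name] = dict(p, beta=beta, R0=beta / p["gamma"],
                         deaths=simulate_deaths(beta, sigma, p["gamma"], p["ifr"]))

# Rescale scenario B's initial seed so day-30 cumulative deaths match scenario A
# (early on, deaths scale roughly linearly in the size of the seed).
name_a, name_b = results
scale = results[name_a]["deaths"][29] / results[name_b]["deaths"][29]
b = results[name_b]
b["deaths"] = simulate_deaths(b["beta"], sigma, b["gamma"], b["ifr"], E0=1000.0 * scale)

for name, res in results.items():
    print(f"{name} -> R0 = {res['R0']:.2f}, "
          f"deaths by day 30 = {res['deaths'][29]:,.0f}, "
          f"deaths by day 300 = {res['deaths'][-1]:,.0f}")
```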

And here is a further paper on the IHME model, by statisticians from CTDS, Northwestern University, and the University of Texas; here is an excerpt from the opener:

  • In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1).
  • The ability of the model to make accurate predictions decreases with increasing amount of data (see Figure 2).

Again, I am very happy to present counterevidence to these arguments.  I readily admit this is outside my area of expertise, but I have read through the paper, and it is not much more than a few pages of recording numbers and comparing them to the actual outcomes (you will note the model predicts New York fairly well, and thus the predictions are of a “train wreck” nature).

Let me just repeat the two central findings again:

  • In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1).
  • The ability of the model to make accurate predictions decreases with increasing amount of data (see Figure 2).

So now really is the time to be asking tough questions about epidemiology, and yes, epidemiologists.  I would very gladly publish and “signal boost” the best positive response possible.

And just to be clear (again), I fully support current lockdown efforts (best choice until we have more data and also a better theory), I don’t want Fauci to be fired, and I don’t think economists are necessarily better forecasters.  I do feel I am not getting straight answers.

From my email, a note about epidemiology

This is all from my correspondent; I won’t do any further indentation, and I have removed some identifying information.  Here goes:

“First, some background on who I am.  After taking degrees in math and civil engineering at [very very good school], I studied infectious disease epidemiology at [another very, very good school] because I thought it would make for a fulfilling career.  However, I became disillusioned with the enterprise for three reasons:

  1. Data is limited and often inaccurate in the critical forecasting window, leading to very large confidence bands for predictions
  2. Unless the disease has been seen before, the underlying dynamics may be sufficiently vague to make your predictions totally useless if you do not correctly specify the model structure
  3. Modeling is secondary to the governmental response (e.g., effective contact tracing) and individual action (e.g., social distancing, wearing masks)

Now I work as a quantitative analyst for [very, very good firm], and I don’t regret leaving epidemiology behind.  Anyway, on to your questions…

What is an epidemiologist’s pay structure?

The vast majority of trained epidemiologists who would have the necessary knowledge to build models are employed in academia or the public sector; so their pay is generally average/below average for what you would expect in the private sector for the same quantitative skill set.  So, aside from reputational enhancement/degradation, there’s not much of an incentive to produce accurate epidemic forecasts – at least not in monetary terms.  Presumably there is better money to be made running clinical trials for drug companies.

On your question about hiring, I can’t say how meritocratic the labor market is for quantitative modelers.  I can say though that there is no central lodestar, like Navier-Stokes in fluid dynamics, that guides the modeling framework.  True, SIR, SEIR, and other compartmental models are widely used and accepted; however, the innovations attached to them can be numerous in a way that does not suggest parsimony.

How smart are epidemiologists?

The quantitative modelers are generally much smarter than the people performing contact tracing or qualitative epidemiology studies.  However, if I’m being completely honest, their intelligence is probably lower than that of the average engineering professor – and certainly below that of mathematicians and statisticians.

My GRE scores were very good, and I found epidemiology to be a very interesting subject – plus, I can be pretty oblivious to what other people think.  Yet when I told several of my professors in math and engineering of my plans, it was hard for me to miss their looks of disappointment.  It’s just not a track that driven, intelligent people with a hint of quantitative ability take.

What is the political orientation of epidemiologists?  What is their social welfare function?

Left, left, left.  In the United States, I would be shocked if more than 2-5% of epidemiologists voted for Republicans in 2016 – at least among academics.  At [aforementioned very very good school], I’d be surprised if the number was 1%.  I remember the various unprompted bashing of Trump and generic Republicans on political matters unrelated to epidemiology in at least four classes during the 2016-17 academic year.  Add that to the (literal) days of mourning after the election, and it’s fair to say that academic epidemiologists are pretty solidly in the left-wing camp. (Note: I didn’t vote for Trump or any other Republican in 2016 or 2018)

I was pleasantly surprised during my time at [very, very good school] that there was at least some discussion of cost-benefit analysis for public health actions, including quarantine procedures.  Realistically though, there’s a dominant strain of thought that the economic costs of an action are secondary to stopping the spread of an epidemic.  To summarize the SWF: damn the torpedoes, full steam ahead!

Do epidemiologists perform uncertainty quantification?

They seem to play around with tools like the ensemble Kalman filter (found in weather forecasting) and stochastic differential equations, but it’s fair to say that mechanical engineers are much better at accounting for uncertainty (especially in parameters and boundary conditions) in their simulations than epidemiologists.  By extension, that probably means that econometricians are better too.”

TC again: I am happy to pass along other well-thought-out perspectives on this matter, and I would like to hear a more positive take.  Please note I am not endorsing these (or subsequent) observations, I genuinely do not know, and I will repeat I do not think economists are likely better.  It simply seems to me that “who are these epidemiologists anyway?” is a question now worth addressing, and hardly anyone is willing to do that.

As an opening gambit, I’d like to propose that we pay epidemiologists more.  (And one of my correspondents points out they are too often paid on “soft money.”)  I know, I know, this plays with your mood affiliation.  You would like to find a way of endorsing that conclusion, without simultaneously admitting that right now maybe the quality isn’t quite high enough.

Epidemiology and selection problems and further heterogeneities

Richard Lowery emails me this:

I saw your post about epidemiologists today.  I have a concern similar to point 4 about selection, based on what I have seen being used for policy in Austin.  It looks to me like the models being used for projection calibrate R_0 off of the initial doubling rate of the outbreak in an area.  But, if people who are most likely to spread to a large number of people are also more likely to get infected early in an outbreak, you end up with what looks kind of like a classic Heckman selection problem, right? In any observable group, there is going to be an unobserved distribution of contact frequency, and it would seem potentially first order to account for that.

As far as I can tell, if this criticism holds, the models are going to (1) be biased upward, predicting a far higher peak in the absence of policy intervention and (2) overstate the likely severity of an outcome without policy intervention, while potentially understating the value of aggressive containment measures.  The epidemiology models I have seen look really pessimistic, and they seem like they can only justify any intervention by arguing that the health sector will be overwhelmed, which now appears unlikely in a lot of places.  The Austin report did a trick of cutting off the time axis to hide that total infections do not seem to change that much under the different social distancing policies; everything just gets dragged out.

But, if the selection concern is right, the pessimism might be misplaced if the late epidemic R_0 is lower, potentially leading to a much lower effective spread rate and the possibility of killing the thing off at some point before it infects the number of people required to create the level of immunity the models predict is required.  This seems feasible based on South Korea and maybe China, at least for areas in the US that are not already out of control.

I do not know the answers to the questions raised here, but I do see the debate on Twitter becoming more partisan, more emotional, and less substantive.  You cannot say that about this communication.  From the MR comments this one — from Kronrad — struck me as significant:

One thing both economists and epidemiologists seem to be lacking is an awareness of the problems of aggregation. Most models in both fields see the population as one homogeneous mass of individuals. But sometimes individual variation makes a difference in the aggregate, even if the average is the same.

In the case of pandemics, it makes a big difference how the infection rate varies across the population. Most models assume that it is the same for everyone. But in reality, human interactions are not evenly distributed. Some people shake hands all day, while others spend their days mostly alone in front of a screen. This uneven distribution has an interesting effect: those who spread the virus the most are also the most likely to get it. This means that the infection rate looks much higher at the beginning of a pandemic, but sinks once the super spreaders have had the disease and gained immunity. It also means herd immunity is reached much earlier: not after 70% of the population is immune, but after the people who are involved in 70% of all human interactions are immune. On average, this is the same. But in practice, it can make a big difference.

I did a small simulation on this and came to the conclusion that with a recursively applied Pareto distribution, in which 1/3 of all people are responsible for 2/3 of all human interaction, herd immunity is already reached once about 10% of the population has had the virus. So individual variation in the infection rate can make an enormous difference that is not captured in aggregate models.

My quick and dirty simulation can be found here:
https://github.com/meisserecon/corona
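
Kronrad’s quick and dirty simulation is at the link above; here is an independent minimal sketch of the same idea, under my own illustrative assumptions: both a person’s exposure and infectiousness scale with an “activity” level drawn from a recursive 1/3-2/3 split (capped at eight levels to keep the toy numerically tame), mixing is proportionate, generations are discrete, and the initially observed reproduction number is 2.5.  The printed threshold is noisy across runs and will not match Kronrad’s 10% figure exactly, but the qualitative pattern comes through: the effective reproduction number starts high and falls as the most active people get infected (which also speaks to Richard Lowery’s selection concern above), and herd immunity arrives far below the homogeneous 1 - 1/R0 = 60% mark.

```python
# A rough, independent sketch (not Kronrad's code) of how heterogeneous contact
# rates pull the herd immunity threshold down. Assumptions: exposure and
# infectiousness both scale with an "activity" level from a recursive 1/3-2/3
# split (group k has probability (2/3)*(1/3)^(k-1) and activity 2^(k-2)),
# capped at 8 levels; proportionate mixing; discrete generations.

import numpy as np

rng = np.random.default_rng(0)
N = 200_000

groups = np.minimum(rng.geometric(2 / 3, size=N), 8)   # recursion capped at 8 levels
activity = 2.0 ** (groups - 2)                         # per-capita activity level

# Calibrate transmission so the reproduction number measured at the start of
# the outbreak (when the infected are themselves activity-weighted) is ~2.5.
R0 = 2.5
c = R0 * activity.sum() / np.sum(activity ** 2)

susceptible = np.ones(N, dtype=bool)
# Seed with infections drawn proportionally to activity, as happens early on.
infected = rng.choice(N, size=50, replace=False, p=activity / activity.sum())
susceptible[infected] = False
cum = len(infected)

total_sq = np.sum(activity ** 2)
threshold = None
for gen in range(40):
    if len(infected) == 0:
        break
    # With proportionate mixing, the current reproduction number scales with
    # the share of squared activity that is still susceptible.
    R_eff = R0 * np.sum(activity[susceptible] ** 2) / total_sq
    if R_eff < 1 and threshold is None:
        threshold = cum / N
    # Each infected person makes Poisson(c * activity) contacts, each landing
    # on a partner chosen with probability proportional to the partner's activity.
    n_contacts = rng.poisson(c * activity[infected]).sum()
    partners = rng.choice(N, size=n_contacts, p=activity / activity.sum())
    new_cases = np.unique(partners[susceptible[partners]])
    susceptible[new_cases] = False
    cum += len(new_cases)
    infected = new_cases
    print(f"generation {gen:2d}: R_eff = {R_eff:4.2f}, infected so far = {cum / N:6.1%}")

print("\ninfected share when R_eff first fell below 1:",
      "(not reached)" if threshold is None else f"{threshold:.1%}")
print(f"homogeneous-model herd immunity threshold, 1 - 1/R0: {1 - 1 / R0:.1%}")
```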

See also Robin Hanson’s earlier post on variation in R0.  C’mon people, stop your yapping on Twitter and write some decent blog posts on these issues.  I know you can do it.

What does this economist think of epidemiologists?

I have had fringe contact with more epidemiology than usual as of late, for obvious reasons, and I do understand this is only one corner of the discipline.  I don’t mean this as a complaint dump, because most of economics suffers from similar problems, but here are a few limitations I see in the mainline epidemiological models put before us:

1. They do not sufficiently grasp that long-run elasticities of adjustment are more powerful than short-run elasticities.  In the short run you socially distance, but in the long run you learn which methods of social distancing protect you the most.  Or you move from doing “half home delivery of food” to “full home delivery of food” once you get that extra credit card or learn the best sites.  In this regard the epidemiological models end up being too pessimistic, and it seems that “the natural disaster economist complaints about the epidemiologists” (yes there is such a thing) are largely correct on this count.  On this question economic models really do better, though not the models of everybody.

2. They do not sufficiently incorporate public choice considerations.  An epidemic path, for instance, may be politically infeasible, which leads to adjustments along the way, and very often those adjustments are stupid policy moves from impatient politicians.  This is not built into the models I am seeing, nor are such factors built into most economic macro models, even though there is a large independent branch of public choice research.  It is hard to integrate.  Still, it means that epidemiological models will be too optimistic, rather than too pessimistic as in #1.  Epidemiologists might protest that it is not the purpose of their science or models to incorporate politics, but these factors are relevant for prediction, and if you try to wash your hands of them (no pun intended) you will be wrong a lot.

3. The Lucas critique, namely that agents within a model, knowing the model, will change how the model itself operates.  Epidemiologists seem super-aware of this, much more than Keynesian macroeconomists are these days, though it seems to be more of an “I told you that you should listen to us” embodiment than trying to find an actual closed-loop solution for the model as a whole.  That is really hard, either in macroeconomics or epidemiology.  Still, on the predictive front, without a good instantiation of the Lucas critique a lot will again go askew, as indeed it does in economics.

The epidemiological models also do not seem to incorporate Sam Peltzman-like risk offset effects.  If you tell everyone to wear a mask, great!  But people will feel safer as a result, and end up going out more.  Some of the initial safety gains are given back through the subsequent behavioral adjustment.  Epidemiologists might claim these factors already are incorporated in the variables they are measuring, but they are not constant across all possible methods of safety improvement.  Ideally you may wish to make people safer in a not entirely transparent manner, so that they do not respond with greater recklessness.  I have not yet seen a Straussian dimension in the models, though you might argue many epidemiologists are “naive Straussian” in their public rhetoric, saying what is good for us rather than telling the whole truth.  The Straussian economists are slightly subtler.

4. Selection bias from the failures coming first.  The early models were calibrated from Wuhan data, because what else could they do?  Then came northern Italy, which was also a mess.  It is the messes which are visible first, at least on average.  So some of the models may have been too pessimistic at first.  These days we have Germany, Australia, and a bunch of southern states that haven’t quite “blown up” as quickly as they should have.  If the early models had access to all of that data, presumably they would be more predictive of the entire situation today.  But it is no accident that the failures will be more visible early on.

And note that right now some of the very worst countries (Mexico, Brazil, possibly India?) are not far enough along on the data side to yield useful inputs into the models.  So currently those models might be picking up too many semi-positive data points and not enough from the “train wrecks,” and thus they are too optimistic.

On this list, I think my #1 comes closest to being an actual criticism; the other points are more like observations about doing science in a messy, imperfect world.  In any case, when epidemiological models are brandished, keep these limitations in mind.  But the more important point may be for when critics of epidemiological models raise the limitations of those models.  Very often the cited criticisms are chosen selectively, to support some particular agenda, when in fact the biases in the epidemiological models could run in either an optimistic or pessimistic direction.

Which is how it should be.

Now, to close, I have a few rude questions that nobody else seems willing to ask, and I genuinely do not know the answers to these:

a. As a class of scientists, how much are epidemiologists paid?  Is good or bad news better for their salaries?

b. How smart are they?  What are their average GRE scores?

c. Are they hired into thick, liquid academic and institutional markets?  And how meritocratic are those markets?

d. What is their overall track record on predictions, whether before or during this crisis?

e. On average, what is the political orientation of epidemiologists?  And compared to other academics?  Which social welfare function do they use when they make non-trivial recommendations?

f. We know, from economics, that if you are a French economist, being a Frenchman predicts your political views better than does being an economist (there is an old MR post on this somewhere).  Is there a comparable phenomenon in epidemiology?

g. How well do they understand how to model uncertainty of forecasts, relative to say what a top econometrician would know?

h. Are there “zombie epidemiologists” in the manner that Paul Krugman charges there are “zombie economists”?  If so, what do you have to do to earn that designation?  And are the zombies sometimes right, or right on some issues?  How meta-rational are those who allege zombie-ism?

i. How many of them have studied Philip Tetlock’s work on forecasting?

Just to be clear, as MR readers will know, I have not been criticizing the mainstream epidemiological recommendations of lockdowns.  But still those seem to be questions worth asking.

What should I ask Adam Tooze?

I will be doing a Conversation with him, no associated public event.  He has been tweeting about the risks of a financial crisis during Covid-19, but more generally he is one of the most influential historians, currently a Professor at Columbia University.  His previous books cover German economic history, German statistical history, the financial crisis of 2008, and most generally early to mid-20th century European history.  Here is his home page, here is his bio, here is his Wikipedia page.

So what should I ask him?

Fast Grants against Covid-19, an extension of Emergent Ventures

Emergent Ventures, a project of the Mercatus Center at George Mason University, is leading a new “Fast Grants” program to support research to fight Covid-19.  Here is the bottom line:

Science funding mechanisms are too slow in normal times and may be much too slow during the COVID-19 pandemic. Fast Grants are an effort to correct this.

If you are a scientist at an academic institution currently working on a COVID-19 related project and in need of funding, we invite you to apply for a Fast Grant. Fast grants are $10k to $500k and decisions are made in under 48 hours. If we approve the grant, you’ll receive payment as quickly as your university can receive it.

More than $10 million in support is available in total, and that is in addition to earlier funds raised to support prizes.  The application site has further detail and explains the process and motivation.

I very much wish to thank John Collison, Patrick Collison, Paul Graham, Reid Hoffman, Fiona McKean and Tobias Lütke, Yuri and Julia Milner, and Chris and Crystal Sacca for their generous support of this initiative, and I am honored to be a part of it.

Meanwhile, elsewhere in the world (FT):

The president of the European Research Council — the EU’s top scientist — has resigned after failing to persuade Brussels to set up a large-scale scientific programme to fight Covid-19.

In contrast:

During World War II, the NDRC accomplished a lot of research very quickly. In his memoir, Vannevar Bush recounts: “Within a week NDRC could review the project. The next day the director could authorize, the business office could send out a letter of intent, and the actual work could start.” Fast Grants are an effort to unlock progress at a cadence similar to that which served us well then.

We are not able at this time to process small donations for this project, but if you are an interested donor please reach out to [email protected].

Emergent Ventures winners, eighth cohort

Eibhlin Lim, Penang and University of Chicago.

“I interview founders from different industries and around the globe and share their origin stories to inspire the next generation of founders to reach for their own dreams. I previously shared these stories in Phoenix Newsletters, an online newsletter that organically grew to serve more than 7000 high school and university student subscribers primarily from Malaysia. In July 2018, I decided to self-publish and distribute a book, ‘The Phoenix Perspective’, which contains some of the most loved stories from Phoenix Newsletters, after learning that some of our biggest fans did not have constant access to the Internet and went to great lengths to read the stories. With the help of founders and organizations, I managed to bring this book to these youths and also 1000+ other youths from 20+ countries around the globe. I hope to be able to continue interviewing founders and share their origin stories, on a new website, to reach even more future founders from around the world.”

Carole Treston/Association of Nurses in AIDS Care

To jump-start a Covid-19 program to produce cheap informational videos and distribute them to their nurse network for better information and greater safety, including for patients.

Kyle Redelinghuys

“Right now, the main sources of data for Coronavirus are CSV files and websites, which make the data fairly inaccessible for developers to work with. By giving easy access to this data, more products can be built and more information can be shared. The API I built is an easily accessible, single source of Coronavirus data to enable developers to build new products based on COVID19 data. These products could be mobile applications, web applications and graphed data… The API exposes this data in JSON, which is the easiest data format to work with for web and mobile developers. This in turn allows for quick integration into any product. The API is also completely free to users.”

Seyone Chithrananda

A 17-year-old from Ontario who wishes to work in San Francisco, he does computational biology with possible applications to Covid-19 as well; Twitter here.  His Project De Novo uses molecular machine learning methods for novel small molecule discovery, and the grant will be used to scale up the cloud computing infrastructure and purchase chemical modelling software.

Joshua Broggi, Woolf University

To build an online university to bring learning programs to the entire world, including to businesses but by no means only to them.  His background is in philosophy and German thought, and now he is seeking to change the world.

Congratulations!

There is also another winner, but the nature of that person’s job means that reporting must be postponed.

Here are previous Emergent Ventures winners, and here is an early post on the philosophy of Emergent Ventures.  You will note that the Covid-19-related work here is simply winning regular EV grants; these are not the prizes I outlined a short while ago.  I expect more prize winners to be announced fairly soon.

Pooling to multiply SARS-CoV-2 testing throughput

Here is an email from Kevin Patrick Mahaffey, and I would like to hear your views on whether this makes sense:

One question I don’t hear being asked: Can we use pooling to repeatedly test the entire labor force at low cost with limited SARS-CoV-2 testing supplies?

Pooling is a technique used elsewhere in pathogen detection where multiple samples (e.g. nasal swabs) are combined (perhaps after the RNA extraction step of RT-qPCR) and run as one assay. A negative result confirms no infection of the entire pool, but a positive result indicates “one or more of the pool is infected.” If this is the case, then each individual in the pool can receive their own test (or, if we’re getting fancy [read: probably too hard to implement in the real world], perform an efficient search of the space using sub-pools).

To me, at least, the key questions seem to be:

– Are current assays sensitive enough to work? Technion researchers report yes in a pool as large as 60.

– Can we align limiting factors in testing cost/velocity with pooled steps? For example, if nasal swabs are the limiting reagent, then pooling doesn’t help; however if PCR primers and probes are limiting it’s great.
– Can we get a regulatory allowance for this? Perhaps the hardest step.

Example (readers, please check my back-of-the-envelope math): If we assume base infection rate of the population is 1%, then pooling of 11 samples has a ~10% chance of coming out positive. If you run all positive pools through individual assays, the expected number of tests per person is 0.196 or a 5.1x multiple on testing throughput (and a 5.1x reduction in cost). This is a big deal.

If we look at this from the view of whole-population biosurveillance after the outbreak period is over and we have a 0.1% base infection rate, pools of 32 samples have an expected number of tests per person at 0.0628 or a 15.9x multiple on throughput/cost reduction.

Putting prices on this, an initial whole-US screen at 1% rate would require about 64M tests. Afterward, performing periodic biosurveillance to find hot spots requires about 21M tests per whole-population screen. At $10/assay (what some folks working on in-field RT-qPCR tests believe marginal cost could be), this is orders of magnitude less expensive than mitigations that deal with a closed economy for any extended period of time.

I’m neither a policy nor medical expert, so perhaps I’m missing something big here. Is there really $20 on the ground or [something something] efficient market?
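
Since he asks readers to check the arithmetic, here is a minimal sketch that reproduces the back-of-the-envelope figures under his stated assumptions (independent infections at the given prevalence, a perfectly sensitive assay, one follow-up test per person in any positive pool, and a US population of roughly 330 million, which is about what the 64M and 21M totals imply):

```python
# Check the back-of-the-envelope pooled-testing arithmetic: one assay per pool,
# plus one follow-up assay per person whenever that person's pool tests positive.
# Assumes independent infections and a perfectly sensitive assay; the 330 million
# population figure is my assumption (roughly what the email's totals imply).

def pooled_testing(prevalence, pool_size, population=330e6):
    """Expected tests per person under naive two-stage pooling."""
    p_pool_positive = 1 - (1 - prevalence) ** pool_size
    tests_per_person = 1 / pool_size + p_pool_positive
    return {
        "P(pool positive)": p_pool_positive,
        "expected tests per person": tests_per_person,
        "throughput multiple vs. individual testing": 1 / tests_per_person,
        "tests for a whole-population screen (millions)": population * tests_per_person / 1e6,
    }

for prevalence, pool_size in [(0.01, 11), (0.001, 32)]:
    print(f"prevalence {prevalence:.1%}, pool size {pool_size}:")
    for label, value in pooled_testing(prevalence, pool_size).items():
        print(f"  {label}: {value:.3g}")
```

The printout matches the 0.196 tests per person (5.1x) and 0.0628 tests per person (15.9x) figures above; the open questions are the ones he flags, namely assay sensitivity at high dilution and regulatory allowance.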

By the way, Iceland is testing many people and trying to build up representative samples.

The three ideas you all are writing me the most about

1. Segregating old people, and letting others go about their regular business.  Given how many older people now work (and vote), and how many employees in nursing homes are young, I’ve yet to see a good version of this plan, but if you favor it please do try to write one up.  One of you suggested taking everyone over the age of 65 and encasing them in bubble wrap, or something.

2. Tracking and surveillance by smart phones.  Here is one story, here is another.  Here is an Oxford project.  Singapore is using related ideas, China has too.

3. Testing as many Americans as possible, or at least a representative sample, to get data.

I hope to analyze these more in the future.

*The Origins of You: How Childhood Shapes Later Life*

That is the new forthcoming book by Jay Belsky, Avshalom Caspi, Terrie E. Moffitt, and Richie Poulton, which will prove one of the best and most important works of the last few years.  Imagine following one thousand or so Dunedin New Zealanders for decades of their lives, up through age 38, and recording extensive data, and then doing the same for one thousand or so British twins through age 20, and 1500 American children, in fifteen different locales, up through age 15.  Just imagine what you would learn!

You merely have to buy this book.  In the meantime, let me give you just a few of the results.

The traits of being “undercontrolled” or “inhibited” as a toddler are the traits most likely to persist up through age eighteen.  The undercontrolled tend to end up as danger-seeking or impulsive.  Those same individuals were most likely to have gambling disorders at age 32.  Girls with an undercontrolled temperament, however, ran into much less later danger than did the boys, including for gambling.

“Social and economic wealth accumulated by the fourth decade of life also proved to be related to childhood self-control.”  And yes that is with controls, including for childhood social class.

Being formally diagnosed with ADHD in childhood was statistically unrelated to being so diagnosed later in adult life.  It did, however, predict elevated levels of “hyperactivity, inattentiveness, and impulsivity” later in adulthood.  I suspect that all reflects more poorly on the diagnoses than on the concept.  By the way, decades later three-quarters of parents did not even remember their children receiving ADHD diagnoses, or exhibiting symptoms of ADHD (!).

Parenting styles are intergenerationally transmitted for mothers but not for fathers.

In one case the authors were able to measure DNA, and still they found that parenting styles affected the development of the children (p.104).

As for the effects of day care, it seems what matters for the mother-child relationship is the quantity of time spent by the mother taking care of the child, not the quality (p.166).  For the intellectual development of the child, however, it is quality time that matters, not the quantity.  By age four and a half, the children who spent more time in day care were more disobedient and aggressive.  At least on average, those problems persist through the teen years.  The good news is that the quality of the family environment growing up still matters more than day care.

But yet there is so much more!  I have only scratched the surface of this fascinating book.  I will not here betray the results on the effects of neighborhoods on children, for instance, among numerous other topics and questions.  Or how about bullying?  Early and persistent marijuana use?  (Uh-oh)  And what do we know about polygenic scores and career success?  What can we learn about epigenetics by considering differential victimization of twins?  What in youth predicts later telomere erosion?

I would describe the writing style as “clear and factual, but not entertaining.”

You can pre-order it here, one of the books of the year and maybe more, recommended of course.