What are the best things to read to estimate what’s going to happen from here?
In particular, what is the best way to think about how to make inferences, or not, from extrapolating current trends about case and death numbers?
What is “what happens from here” going to be most sensitive to in terms of potential best remedies? Regulatory decisions of some kind? Which features of local public health infrastructure will matter the most? Will any of it matter at all?
Which variables should we focus on to best predict expected severity?
At a keynote address at the Precision Medicine World Conference, Thiel argued for enabling riskier research grant-making via institutions such as the NIH, as well as abandoning the scientific staple of the double-blind trial and encouraging the U.S. FDA to further accelerate its regulatory evaluations. He said that these deficiencies are inhibiting the ability of scientists to make major advances, despite the current environment that is flooded with capital and research talent.
Make science great again?
“There’s a story we can tell about what happened historically in how processes became bureaucratized. Early science funding was very informal – DARPA’s a little bit different – but in the 1950s and 1960s, it was very generative,” said Thiel. “You just had one person [who] knew the 20 top scientists and gave them grants – there was no up-front application process. Then gradually, as things scaled, they became formalized.
“One question is always how things scale,” he continued. “There are certain types of businesses where they work better and better at bigger and bigger scales,” he said, pointing to big tech. “And, if big tech is an ambiguous term, I wonder whether big science is simply an oxymoron.”
He then cited the success of major scientific programs – such as the development of the atomic bomb in the Manhattan Project, the Apollo space program and Watson and Crick’s discovery of the structure of DNA – that hinged on having “preexisting, idiosyncratic, quirky, decentralized scientific culture[s]” and were accelerated rapidly by a major infusion of cash.
“When I invest in biotech, I have a sort of a model for the type of person I’m looking to invest in,” said Thiel. “There’s sort of a bimodal distribution of scientists. You basically have people who are extremely conventional and will do experiments that will succeed but will not mean anything. These will not actually translate into anything significant, and you can tell that it is just a very incremental experiment. Then you have your various people who are crazy and want to do things that are [going to] make a very big difference. They’re, generally speaking, too crazy for anything to ever work.”
“You want to … find the people who are roughly halfway in between. There are fewer of those people because of … these institutional structures and whatnot, but I don’t think they’re nonexistent,” he continued. “My challenge to biotech venture capitalists is to find some of those people who are crazy enough to try something bold, but not so crazy that it’s going to be this mutation where they do 100 things differently.”
That is the topic of my latest Bloomberg column, here is one short excerpt:
…most of the vaccine-making capacity against a new virus would be concentrated in a few multinationals, and much of that activity occurs outside the U.S. If a pandemic were to become truly serious, politics might intervene and prevent the export of doses of the vaccine, no matter what the price.
The economic case for free trade is entirely sound. But here is one case where the U.S. government should take the initiative to support a domestic vaccine industry — because that trade is unlikely ever to be free.
And if you think the market will provide the solution, consider that potential suppliers may fear being hit with price caps, IP confiscations, or other after-the-fact “takings” by the U.S. government. So it is important to think now about how to create the right structures for the eventual creation of treatments and cures.
In the meantime, wash your hands! Nonetheless, so far the smart money still ought to bet that this one will evolve into less virulent forms, and it already seems that a disproportionate number of the people dying are quite old.
Here is Scott’s response to Bryan Caplan’s response to Scott’s critique of Bryan’s earlier Szaszian paper on mental illness. I can’t bring myself to do any serious recap, so I hope you care (or do I hope you don’t care?); in any case, Scott serves up the links:
Bryan rejects the concept of mental illness, believing that such individuals can be described using concepts from rational choice theory, most of all preferences and meta-preferences:
…this article argues that most mental illnesses are best modeled as extreme preferences, not constraining diseases.
Most recently, here is a snippet from Scott’s latest post:
Or what about respiratory tract infections that cause coughing? My impression is that, put a gun to my head, I could keep myself from coughing, even when I really really felt like it. Coughing is a preference, not a constraint, and Bryan, to be consistent, would have to think of respiratory infections as just a preference for coughing…
Bryan’s preference vs. constraint model doesn’t just invalidate mental illness. It invalidates many (maybe most) physical illnesses! Even the ones it doesn’t invalidate may only get saved by some triviality we don’t care about – like how maybe you can lift less weight when you have the flu – and not by the symptoms that actually bother us.
I am fully on Scott’s side here, but I think he is being too literal in responding to Bryan’s arguments, taking on too much of Bryan’s turf.
The biggest problem with Bryan’s argument is this: let’s say you could redescribe, say, schizophrenia in terms of an unusual preference and other concepts from rational choice theory. It would not follow that that is all schizophrenia is. For instance, a quick perusal of the literature shows that schizophrenic individuals may suffer from local processing deficits (moving too rapidly and too indiscriminately to global processing), working memory defects, inability to maintain attention, disorganized behavior, hypo- and hyper-excitability, excessive speculative ideation, excess receptivity to information from the right hemisphere of the brain, and delusions.
Of course that account is contested at some margins, as is typically the case in a research literature, but you get the point. Schizophrenia could be some combination of an extreme preference, whatever else Bryan wishes to toss in, and some version of that list from the paragraph directly above. Bryan works very hard to “rule in” his redescription of various mental illnesses, but he doesn’t and indeed cannot do much to rule out what are in fact the relevant cognitive or sometimes personality traits of the phenomenon in question.
And if you ask “Ah, what about the ‘normal’ people who claim that God is talking to them?”, well most of them have only a limited number of the features on that above list. Some of course may in fact be schizophrenic or fall into the broader schizotypic category. Those supposed reductios about the supposedly wacky religious people just don’t much dent the category of schizophrenia. There might even be a correlation in the data between religious behavior and schizotypy — why not? — but the two are by no means cognitively identical.
Ask Bryan a simple question: do the individuals diagnosed with schizophrenia in fact have some combination of those traits listed above to an unusual degree? If he answers “yes,” he has in fact conceded the argument. If he answers “no,” he needs to counter a huge and established literature with empirics of his own, which of course he has not done. The broader point is you cannot usually vanquish empirical categories with philosophical and methodological arguments alone.
I do partially side with Bryan only in one regard: I don’t find the term “mental illness” very useful, and very often it is misleading, or even dangerous, or used to restrict the liberties of individuals unjustly. I very much prefer a more disaggregated approach, citing more exact information about a person’s condition, rather than applying a very general label in a manner that could end up being irresponsible. It seems to me that a more disaggregated description is almost always possible, maybe always possible.
But you shouldn’t take that brand of skepticism as endorsing the kind of mono-conceptual straitjacket Bryan wishes to impose on this whole problem.
Using current methods of inventing drugs, Borisy believes it will be possible to create new medicines that mimic the effects of existing big sellers, and bring them to market in a matter of years. Then EQRx will sell them to insurers and large hospital systems at a discount, displacing the innovators. Because its medicines will be cheaper to develop, EQRx will be able to make a handy profit despite these lower prices. The key question is whether health insurers and giant hospital systems have gotten desperate enough to want to shake up the system.
Quite simply, Borisy is going to invent and develop new drugs, and sell them for less money than the competition. He calls this “a radical proposition.” In any other sector, it would just be called “business.”
But the branded drug business is different, in part for structural reasons but also because patients and doctors tend to be reluctant to switch to a new medicine just because it’s cheaper. That has helped lead to dramatically higher prices.
Why, Borisy asks, have prices of, for instance, cancer drugs gone up eightfold over 20 years if the technology to make new medicines is steadily improving, and if we are, in fact, as he says, in “a golden age of biotech and pharmaceutical innovation”?
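Borisy’s eightfold figure implies a striking compound growth rate. As a back-of-the-envelope sketch (my arithmetic, not the article’s):

```python
# Illustrative only: an eightfold price rise over 20 years implies
# roughly an 11% annual compound growth rate, far above general inflation.
fold_increase = 8
years = 20

# Solve (1 + g)^years = fold_increase for the annual growth rate g.
annual_growth = fold_increase ** (1 / years) - 1

print(f"implied annual price growth: {annual_growth:.1%}")
```

That compounding is what makes a price-competing entrant like EQRx look attractive to payers, if it can actually deliver.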
EQRx is his antidote. On Monday morning, the company is also announcing that it has raised $200 million from a bevy of top tech and biotech investors.
Over the next 10 years, Borisy said, he’d like for EQRx to start developing somewhere in the ballpark of 50 different experimental medicines. He wants the company to come out with its first medicine in five years, and to have 10 drugs within a decade.
Will this succeed? Even if so, why did it take so long for this to happen? The article offers this explanation:
There are multiple reasons creating a company like EQRx will be difficult. The idea of creating a “fast-follower” — a new drug that is much like an existing one — is anything but new. In fact, it has yielded some of the pharmaceutical industry’s biggest sellers. Lipitor followed several other cholesterol medicines to market, but became the best-selling drug in the world in the 2000s. Rheumatoid arthritis treatment Humira, the industry’s current best-seller, was introduced after two similar medicines, Remicade and Enbrel, were already on the market.
But fast-followers do not compete on price, because lowering price has not historically resulted in selling more units of a drug. Instead, the least successful medicine in a category will sometimes raise its price to make up for lost market share, and the best-sellers will often follow, raising their own prices.
That is the topic of my latest Bloomberg column, here is one excerpt:
In other words, the frontier areas for overcoming wage stagnation are several-fold. First is a greater freedom to build, so that housing supply can rise and prices can fall. That also would enable more upward mobility by easing moves to America’s more productive (but also more expensive) regions. Second are steps to lower the cost of medical care through greater competition and price transparency. Third, American higher education is hardly at its optimum point of efficiency, innovation and affordability.
If those sectors displayed some of the dynamism and innovativeness that mark America’s tech sector, the combination of declining prices and rising quality could give living standards a boost. And since rent, health care and tuition tend to be higher shares of the incomes of poorer people, those changes would help poorer people the most.
Think of it as a rooftops piece, combined with a discussion of why wages actually have seen slow growth as of late.
In the US, the normal, oral temperature of adults is, on average, lower than the canonical 37°C established in the 19th century. We postulated that body temperature has decreased over time. Using measurements from three cohorts – the Union Army Veterans of the Civil War (N = 23,710; measurement years 1860–1940), the National Health and Nutrition Examination Survey I (N = 15,301; 1971–1975), and the Stanford Translational Research Integrated Database Environment (N = 150,280; 2007–2017) – we determined that mean body temperature in men and women, after adjusting for age, height, weight and, in some models, date and time of day, has decreased monotonically by 0.03°C per birth decade. A similar decline within the Union Army cohort as between cohorts makes measurement error an unlikely explanation. This substantive and continuing shift in body temperature – a marker for metabolic rate – provides a framework for understanding changes in human health and longevity over 157 years.
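The abstract’s headline numbers are easy to sanity-check. A quick sketch of the implied cumulative decline (my arithmetic, using only the figures the abstract reports):

```python
# Illustrative arithmetic, not from the paper itself: compound the
# reported decline of 0.03 degC per birth decade over the 157-year span.
DECLINE_PER_BIRTH_DECADE = 0.03  # degC, as reported
SPAN_YEARS = 157

birth_decades = SPAN_YEARS / 10              # 15.7 birth decades
total_decline = DECLINE_PER_BIRTH_DECADE * birth_decades

canonical = 37.0                             # 19th-century reference, degC
implied_modern_mean = canonical - total_decline

print(f"total decline: {total_decline:.2f} degC")
print(f"implied modern mean: {implied_modern_mean:.2f} degC")
```

So the reported rate implies roughly half a degree of cooling since the canonical 37°C was established.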
M.B. Malabu, travel grant to come to the D.C. area for helping in setting up a market-oriented think tank in Nigeria.
Nolan Gray, urban planner from NYC, to be in residence at Mercatus and write a book on YIMBY, Against Zoning.
One other, not yet ready to be announced. But a good one.
Here are previous MR posts on Emergent Ventures.
The Lancet Commission on Pollution and Health, an authoritative review with well over a dozen distinguished co-authors, is unusually forthright on the effect of pollution, most especially lead, on IQ. I think some of their numbers, especially in paragraph three, are too large but the direction is certainly correct.
Neurotoxic pollutants can reduce productivity by impairing children’s cognitive development. It is well documented that exposures to lead and other metals (eg, mercury and arsenic) reduce cognitive function, as measured by loss of IQ.[168]
Loss of cognitive function directly affects success at school and labour force participation and indirectly affects lifetime earnings. In the USA, millions of children were exposed to excessive concentrations of lead as the result of the widespread use of leaded gasoline from the 1920s until about 1980. At peak use in the 1970s, annual consumption of tetraethyl lead in gasoline was nearly 100 000 tonnes.
It has been estimated that the resulting epidemic of subclinical lead poisoning could have reduced the number of children with truly superior intelligence (IQ scores higher than 130 points) by more than 50% and, concurrently, caused a more than 50% increase in the number of children with IQ scores less than 70 (figure 14).[265] Children with reduced cognitive function due to lead did poorly in school, required special education and other remedial programmes, and could not contribute fully to society when they became adults.
Grosse and colleagues[46] found that each IQ point lost to neurotoxic pollution results in a decrease in mean lifetime earnings of 1·76%. Salkever and colleagues,[266] who extended this analysis to include the effects of IQ on schooling, found that a decrease in IQ of one percentage point lowers mean lifetime earnings by 2·38%. Studies from the 2000s using data from the USA[267,268] support earlier findings but suggest a detrimental effect on earnings of 1·1% per IQ point.[269] The link between lead exposure and reduced IQ[46,168] suggests that, in the USA, a 1 μg/dL increase in blood lead concentration decreases mean lifetime earnings by about 0·5%. A 2015 study in Chile[270] that followed up children who were exposed to lead at contaminated sites suggests much greater effects. A 2016 analysis by Muennig[271] argues that the economic losses that result from early-life exposure to lead include not only the costs resulting from cognitive impairment but also costs that result from the subsequent increased use of the social welfare services by these lead-exposed children, and their increased likelihood of incarceration.
No, not work smart but put in what would appear to be lots of extra hours. Why not measure who submits papers to journals in the off-work hours?:
Main outcome measures Manuscript and peer review submissions on weekends, on national holidays, and by hour of day (to determine early mornings and late nights). Logistic regression was used to estimate the probability of manuscript and peer review submissions on weekends or holidays.
Results The analyses included more than 49 000 manuscript submissions and 76 000 peer reviews. Little change over time was seen in the average probability of manuscript or peer review submissions occurring on weekends or holidays. The levels of out of hours work were high, with average probabilities of 0.14 to 0.18 for work on the weekends and 0.08 to 0.13 for work on holidays compared with days in the same week. Clear and consistent differences were seen between countries. Chinese researchers most often worked at weekends and at midnight, whereas researchers in Scandinavian countries were among the most likely to submit during the week and the middle of the day.
Emphasis added. Get this, you lazy bastards:
The average probability of a manuscript being submitted at the weekend for both journals was 0.14, and for a peer review it was 0.18. Peer review submissions during holidays had average probabilities of 0.13 (The BMJ) and 0.12 (BMJ Open), which were higher than the probabilities for manuscripts of 0.08 (The BMJ) and 0.10 (BMJ Open).
For weekend paper submission, China appears to be at about 0.22, India at about 0.09, see Figure 1. France, Italy, Spain, and Brazil all submit quite late in the afternoon, often a bit after 6 p.m.
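The paper’s basic classification step is straightforward. A minimal sketch (not the authors’ code) of how one might flag weekend and late-night activity from submission timestamps:

```python
# Sketch of the classification the paper describes: flag weekends and
# late-night hours from each submission's datetime stamp.
from datetime import datetime

def is_weekend(ts: datetime) -> bool:
    return ts.weekday() >= 5  # Monday=0 ... Saturday=5, Sunday=6

def is_late_night(ts: datetime) -> bool:
    # Arbitrary illustrative cutoff, not the paper's definition.
    return ts.hour >= 23 or ts.hour < 5

# Hypothetical submission timestamps.
submissions = [
    datetime(2019, 6, 8, 23, 30),   # Saturday, late night
    datetime(2019, 6, 11, 14, 0),   # Tuesday afternoon
    datetime(2019, 6, 9, 10, 15),   # Sunday morning
]

weekend_share = sum(is_weekend(t) for t in submissions) / len(submissions)
late_share = sum(is_late_night(t) for t in submissions) / len(submissions)
print(f"weekend share: {weekend_share:.2f}")
print(f"late-night share: {late_share:.2f}")
```

The paper then runs these indicators through a logistic regression to estimate the probabilities by country; the shares above are just the raw proportions.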
That is from a new paper by Adrian Barnett, Inger Mewburn, and Sara Schroter. They do not tell us when they submitted it, but I wrote this blog post a wee bit after 8 p.m.
Via Michelle Dawson.
S&P Global: Four Republican lawmakers have authored new legislation to permit drugs for critically ill patients to enter the market before completing late-stage trials, saying the bill was necessary because the U.S. Food and Drug Administration’s regulatory process was too slow and burdensome.
The bill would create a time-limited conditional approval pathway in the U.S. similar to a system that has long been used by European regulators.
…The conditional approval would be valid for one year and could be renewed annually for up to five years….Companies would be required to meet certain obligations, like completing clinical investigations to provide full demonstration of safety and effectiveness and other studies.
…Companies could seek full U.S. approval at any time. The FDA would be required to let manufacturers include in their applications the real-world evidence they collected during the conditional approval period.
The lawmakers want the FDA to be able to grant the limited marketing authorization to new drugs that have successfully completed phase 1 and 2 trials, with the idea that companies could generate revenue to help fund their phase 3 studies.
They emphasized their legislation is targeted especially at small biopharmaceutical companies that may struggle to cover the costs of late-stage trials.
Under the dual-track approval system, companies would be able to sell pharmaceuticals earlier but would be required to track outcomes, so greater real-world information would be developed in the FDA process. The result is a more dynamic approval process better suited to modern medicine. The idea is due to the excellent Bartley J. Madden (note my bias).
Madden and Nobel-prize winner Vernon Smith explained the dual-track idea, noting:
Today’s world of accelerating medical advancements is ushering in an age of personalized medicine in which patients’ unique genetic makeup and biomarkers will increasingly lead to customized therapies in which samples are inherently small. This calls for a fast-learning, adaptable FTCM environment for generating new data. In sharp contrast, the status quo FDA environment provides a yes/no approval decision based on statistical tests for an average patient, i.e., a one-size-fits-all drug approval process.
A similar process has been adopted in Japan for regenerative medicine.
In the bad old days, health care in poor countries was just terrible. It wasn’t only the poverty, lack of hospitals and pharmaceuticals, and unsanitary conditions. In addition, doctors gave very bad advice and they also didn’t work very hard, as outlined in this paper. Citizens suffered accordingly.
Those conditions have improved somewhat, but actual health outcomes have improved a lot. You still can’t trust the local medical advice in Tanzania, but guess what? You have much better vaccines, greater access to antibiotics, more NGOs running health clinics, and better health care information, sometimes through the internet. If your kid has diarrhea, let the kid drink water, even unclean water! As for antibiotics (NYT):
Two doses a year of an antibiotic can sharply cut death rates among infants in poor countries, perhaps by as much as 25 percent among the very young, researchers reported on Wednesday.
In other words, the quality of the most important part of health care treatments bypassed the rest of the problems in poor economies and grew rapidly, even in countries with only so-so economic growth. The rate of reduction in child mortality has tripled in many countries since the 1990s, and by no means are those locales major economic winners as, say, Singapore and South Korea were.
Therein lies one of the most important (and under-reported) global changes in the last twenty years. It is now possible to have a decent public health system in a country with poor or mediocre political and economic institutions.
In other words, public health is no longer such an O-Ring service, an O-Ring service being one where everything has to go right for the service to be of decent quality. And advances are much, much easier when the O-Ring structure no longer rules.
The O-Ring citation is to a famous Michael Kremer paper — a trip to the moon is definitely an O-Ring process, because if one step is off the whole mission probably is a failure. But tasty fish curry is not — you can get a splendid version in some pretty dumpy countries, maybe even a better version in poorer places.
Electricity, however, it seems, is still an O-Ring service, as evidenced by the recent power blackouts in South Africa.
What else is likely to become less of an O-Ring good or service in the next few decades to come? And what can we do to hasten such progress? Is there any chance of quality software production making that same kind of transition? Or might some goods and services return to a greater connection with the O-Ring model?
For this post I am very much indebted to a conversation with Garett Jones.
Every year I curse the optometry racket when I run out of contact lenses and have to return to the optometrist to get a “new” prescription. It’s a service that I don’t want and don’t need but am forced to buy by US law, which requires patients to have a recent doctor’s prescription to buy eyewear. I can stretch out the time by buying months in advance; sometimes I buy when abroad; for a few years I managed to evade the law by buying from Canadian internet sellers, but that route has mostly been shut down. Writing in the Atlantic, Yascha Mounk notes that around the world no prescription is needed:
In every other country in which I’ve lived—Germany and Britain, France and Italy—it is far easier to buy glasses or contact lenses than it is here. In those countries, as in Peru, you can simply walk into an optician’s store and ask an employee to give you an eye test, likely free of charge. If you already know your strength, you can just tell them what you want. You can also buy contact lenses from the closest drugstore without having to talk to a single soul—no doctor’s prescription necessary.
The excuse for the law is that eye exams can discover other problems. Sure, but trade-offs are everywhere. Let people make their own decisions, as Mounk concludes:
Like the citizens of virtually every other country around the world, Americans should be allowed to buy any pair of glasses or set of contact lenses at a moment’s notice. While the requirement to get a medical exam from an optometrist who has spent a minimum of seven years in higher education may have good effects in some cases, it also creates unreasonable costs—and unjustifiable suffering….Put Americans in charge of their own vision care, and abolish mandatory eye exams.
There is increasing interest in expanding Medicare health insurance coverage in the U.S., but it is not clear whether the current program is the right foundation on which to build. Traditional Medicare covers a uniform set of benefits for all income groups and provides more generous access to providers and new treatments than public programs in other developed countries. We develop an economic framework to assess the efficiency and equity tradeoffs involved with reforming this generous, uniform structure. We argue that three major shifts make a uniform design less efficient today than when Medicare began in 1965. First, rising income inequality makes it more difficult to design a single plan that serves the needs of both higher- and lower-income people. Second, the dramatic expansion of expensive medical technology means that a generous program increasingly crowds out other public programs valued by the poor and middle class. Finally, as medical spending rises, the tax-financing of the system creates mounting economic costs and increasingly untenable policy constraints. These forces motivate reforms that shift towards a more basic public benefit that individuals can “top-up” with private spending. If combined with an increase in other progressive transfers, such a reform could improve efficiency and reduce public spending while benefiting low-income populations.
That is from a new NBER working paper by Mark Shepard, Katherine Baicker, and Jonathan S. Skinner.
Video, audio, and transcript here, part of Mark’s personal challenge for the year, an excellent event all around. This will also end up as part of CWT.