This is all from him, though without the usual double indent:
“As a regular reader of your blog and one of the PIs of the Bangladesh Mask RCT (now in press at Science), I was surprised to see your claim that, “With more data transparency, it does not seem to be holding up very well”:
- The article you linked claims, in agreement with our study, that our intervention led to a roughly 10% reduction in symptomatic seropositivity (going from 12% to 41% of the population masked). Taking this estimate at face value, going from no one masked to everyone masked would imply a considerably larger effect. Additionally:
- We see a similar – but more precisely estimated – proportionate reduction in Covid symptoms [95% CI: 7-17%] (pre-registered), corresponding to ~1,500 individuals with Covid symptoms prevented
- We see larger proportionate drops in symptomatic seropositivity and Covid in villages where mask-use increased by more (not pre-registered), with the effect size roughly matching our main result
The naïve linear IV estimate would be a 33% reduction in Covid from universal masking. People underwhelmed by the absolute number of cases prevented need to ask, what did you expect if masks are as effective as the observational literature suggests? I see our results as on the low end of these estimates, and this is precisely what we powered the study to detect.
- Let’s distinguish between:
- a) The absolute reduction in raw consenting symptomatic seropositives (20 cases prevented)
- b) The absolute reduction in the proportion of consenting symptomatic seropositives (0.08 percentage points, or 105 cases prevented)
- c) The relative reduction in the proportion of consenting symptomatic seropositives (a 9.5% reduction in cases)
Ben Recht advocates analyzing a) – the difference in means not controlling for population. This is not the specification we pre-registered, as it will have less power due to random fluctuations in population (and indeed, the difference in raw symptomatic seropositives overlooks the fact that the treatment population was larger – there are more people possibly ill!). Fixating on this specification in lieu of our pre-registered one (for which we powered the study) is reverse p-hacking.
RE: b) vs. c), we find a result of almost identical significance in a linear model, suggesting the same proportionate reduction if we divide the coefficient by the base rate. We believe the relative reduction in c) is more externally valid, as it is difficult to write down a structural pandemic model where masks lead to an absolute reduction in Covid regardless of the base rate (and the absolute number in b) is a function of the consent rate in our study).
- It is certainly true that survey response bias is a potential concern. We have repeatedly acknowledged this shortcoming of any real-world RCT evaluating masks (that respondents cannot be blinded). The direction of the bias is unclear — individuals might be more attuned to symptoms in the treatment group. We conduct many robustness checks in the paper. We have now obtained funding to replicate the entire study and collect blood spots from symptomatic and non-symptomatic individuals to partially mitigate this bias (we will still need to check for balance in blood consent rates with respect to observables, as we do in the current study).
- We do not say that surgical masks work better than cloth masks. What we say is that the evidence in favor of surgical masks is more robust. We find an effect on symptomatic seropositivity regardless of whether we drop or impute missing values for non-consenters, while the effect of cloth masks on symptomatic seropositivity depends on how we do this imputation. We find robust effects on symptoms for both types of masks.
I agree with you that our study identifies only the medium-term impact of our intervention, and there are critically important policy questions about the long-term equilibrium impact of masking, as well as how the costs and benefits scale for people of different ages and vaccination statuses.”
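The naive linear IV scaling the letter invokes is simple arithmetic; here is a minimal sketch, using only the figures quoted above (the 9.5% relative reduction and the 12% → 41% change in mask-wearing):

```python
# Naive linear IV scaling, per the letter. All numbers come from the text.
observed_relative_reduction = 0.095   # ~9.5% relative drop in symptomatic seropositivity
mask_use_control = 0.12               # baseline share of people masked
mask_use_treatment = 0.41             # treatment-group share of people masked

# Effect per unit increase in mask-wearing, linearly scaled to full (0 -> 1) adoption.
implied_universal_effect = observed_relative_reduction / (mask_use_treatment - mask_use_control)
print(round(implied_universal_effect, 2))  # -> 0.33, i.e. the ~33% reduction cited
```

This is, as the letter says, the naive linear extrapolation; it assumes the effect of mask-wearing on transmission is linear in the share of people masked, which a structural epidemic model need not satisfy.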
Great piece by David Wallace-Wells on air pollution.
Here is just a partial list of the things, short of death rates, we know are affected by air pollution. GDP, with a 10 per cent increase in pollution reducing output by almost a full percentage point, according to an OECD report last year. Cognitive performance, with a study showing that cutting Chinese pollution to the standards required in the US would improve the average student’s ranking in verbal tests by 26 per cent and in maths by 13 per cent. In Los Angeles, after $700 air purifiers were installed in schools, student performance improved almost as much as it would if class sizes were reduced by a third. Heart disease is more common in polluted air, as are many types of cancer, and acute and chronic respiratory diseases like asthma, and strokes. The incidence of Alzheimer’s can triple: in Choked, Beth Gardiner cites a study which found early markers of Alzheimer’s in 40 per cent of autopsies conducted on those in high-pollution areas and in none of those outside them. Rates of other sorts of dementia increase too, as does Parkinson’s. Air pollution has also been linked to mental illness of all kinds – with a recent paper in the British Journal of Psychiatry showing that even small increases in local pollution raise the need for treatment by a third and for hospitalisation by a fifth – and to worse memory, attention and vocabulary, as well as ADHD and autism spectrum disorders. Pollution has been shown to damage the development of neurons in the brain, and proximity to a coal plant can deform a baby’s DNA in the womb. It even accelerates the degeneration of the eyesight.
A high pollution level in the year a baby is born has been shown to result in reduced earnings and labour force participation at the age of thirty. The relationship of pollution to premature births and low birth weight is so strong that the introduction of the automatic toll system E-ZPass in American cities reduced both problems in areas close to toll plazas (by 10.8 per cent and 11.8 per cent respectively), by cutting down on the exhaust expelled when cars have to queue. Extremely premature births, another study found, were 80 per cent more likely when mothers lived in areas of heavy traffic. Women breathing exhaust fumes during pregnancy gave birth to children with higher rates of paediatric leukaemia, kidney cancer, eye tumours and malignancies in the ovaries and testes. Infant death rates increased in line with pollution levels, as did heart malformations. And those breathing dirtier air in childhood exhibited significantly higher rates of self-harm in adulthood, with an increase of just five micrograms of small particulates a day associated, in 1.4 million people in Denmark, with a 42 per cent rise in violence towards oneself. Depression in teenagers quadruples; suicide becomes more common too.
Stock market returns are lower on days with higher air pollution, a study found this year. Surgical outcomes are worse. Crime goes up with increased particulate concentrations, especially violent crime: a 10 per cent reduction in pollution, researchers at Colorado State University found, could reduce the cost of crime in the US by $1.4 billion a year. When there’s more smog in the air, chess players make more mistakes, and bigger ones. Politicians speak more simplistically, and baseball umpires make more bad calls.
As MR readers will know Tyler and I have been saying air pollution is an underrated problem for some time. Here’s my video on the topic:
- This report uses natural language processing to analyze the abstracts of successful grants from 1990 to 2020 in the seven fields of Biological Sciences, Computer & Information Science & Engineering, Education & Human Resources, Engineering, Geosciences, Mathematical & Physical Sciences, and Social, Behavioral & Economic Sciences.
- The frequency of documents containing highly politicized terms has been increasing consistently over the last three decades. As of 2020, 30.4% of all grants contained at least one of the following politicized terms: “equity,” “diversity,” “inclusion,” “gender,” “marginalize,” “underrepresented,” or “disparity.” This is up from 2.9% in 1990. The most politicized field is Education & Human Resources (53.8% in 2020, up from 4.3% in 1990). The least politicized are Mathematical & Physical Sciences (22.6%, up from 0.9%) and Computer & Information Science & Engineering (24.9%, up from 1.5%), although even they are significantly more politicized than any field was in 1990.
- At the same time, abstracts in most directorates have become more similar to one another over time. This arguably shows that there is less diversity in the kinds of ideas being funded. The effect is particularly strong in the last few years, but the trend is clear over the last three decades when a technique based on word similarity, rather than exact term matching, is used.
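The exact-term matching the report describes is straightforward to sketch. The abstracts below are hypothetical stand-ins; the term list is the one quoted above:

```python
# Sketch of the exact-term matching described in the report.
# The term list is from the text; the sample abstracts are hypothetical.
POLITICIZED_TERMS = ("equity", "diversity", "inclusion", "gender",
                     "marginalize", "underrepresented", "disparity")

def contains_politicized_term(abstract: str) -> bool:
    """True if the abstract contains at least one term from the list."""
    text = abstract.lower()
    return any(term in text for term in POLITICIZED_TERMS)

abstracts = [
    "We study gradient descent convergence in overparameterized networks.",
    "This project promotes equity and inclusion in STEM education.",
]
share = sum(map(contains_politicized_term, abstracts)) / len(abstracts)
print(share)  # -> 0.5
```

Note that substring matching of this kind also picks up morphological variants (“marginalized,” “disparities”), which is presumably why the report supplements it with a word-similarity technique for the convergence finding.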
As you may recall, each year I scour the job market websites to see what new job candidates are working on and presenting. Unlike many years, this year I did not find the top or most interesting papers at Harvard and MIT. Northwestern and UC Davis seem to be producing notable students. More broadly, interest in economic history continues to grow, and the same is true for urban, regional, and health care economics. There were fewer papers on macro than five or ten years ago, and very few on monetary economics or crypto. Theory papers are rare. Overall, the women seem to be doing more interesting work than the men. Many schools seem to be putting out fewer students than usual. University of Wisconsin at Madison was the website with (by far) the most “pronouns” listed. I fear that this year’s search was more boring than usual, at least for my tastes, due to hyperspecialization of the candidates and their research topics. Perhaps the worst offenders were papers based on balkanized, non-generalizable data sources. I’ll be continuing to look at a few more sites.
Now, a new study published in Royal Society Open Science says honeybees have another defense: screaming.
More precisely, the bees in the study produced a noise known as an “antipredator pipe” — not something that comes out of their mouths, but rather a sound they produce by vibrating their wings, raising their abdomens and exposing a gland used to release a certain kind of pheromone.
Here is the full story.
One second-order effect is that countries with good infrastructure planning would reap a significant relative gain. The fast train from Paris to Nice would become faster yet, but would trains on the Acela corridor?
Next in line: Desalinating water would become cheap and easy, enabling the transformation and terraforming of many landscapes. Nevada would boom, though a vigorous environmental debate might ensue: Just how many deserts should we keep around? Over time, Mali and the Middle East would become much greener.
How about heating and cooling? It might be possible to manipulate temperatures outdoors, so Denmark in January and Dubai in August would no longer be so unbearable. It wouldn’t be too hard to melt snow or generate a cooling breeze.
Wages would also rise significantly. Not only would more goods and services be available, but the demand for labor would also skyrocket. If flying to Tokyo is easier, demand for pilots will be higher. Eventually, more flying would be automated. Robots would become far more plentiful, which would set off yet more second- and third-order effects.
Cheap energy would also make supercomputing more available, crypto more convenient, and nanotechnology more likely.
And limiting climate change would not be as simple as it might at first seem. Yes, nuclear fusion could replace all of those coal plants. But the secondary consequences do not stop there. As water desalination became more feasible, for example, irrigation would become less expensive. Many areas would be far more verdant, and people might raise more cows and eat more beef. Those cows, in turn, might release far more methane into the air, worsening one significant set of climate-related problems.
But all is not lost! Because energy would be so cheap, protective technologies — to remove methane (and carbon) from the air, for instance — are also likely to be more feasible and affordable.
In general, in a carbon-free energy world, the stakes would be higher for a large subset of decisions. If we can clean up the air, great. If not, the overall increase in radical change would create a whole host of new problems, one of which would be more methane emissions. The “race” between the destructive and restorative powers of technology would become all the more consequential. The value of high quality institutions would be much greater, which might be a worry in many parts of the world.
This is a thought exercise, and I would say you are wasting your breath if you fume against fusion power in the comments.
The evaluation and selection of novel projects lies at the heart of scientific and technological innovation, and yet there are persistent concerns about bias, such as conservatism. This paper investigates the role that the format of evaluation, specifically information sharing among expert evaluators, plays in generating conservative decisions. We executed two field experiments in two separate grant-funding opportunities at a leading research university, mobilizing 369 evaluators from seven universities to evaluate 97 projects, resulting in 761 proposal-evaluation pairs and more than $250,000 in awards. We exogenously varied the relative valence (positive and negative) of others’ scores and measured how exposures to higher and lower scores affect the focal evaluator’s propensity to change their initial score. We found causal evidence of a negativity bias, where evaluators lower their scores by more points after seeing scores more critical than their own rather than raise them after seeing more favorable scores. Qualitative coding of the evaluators’ justifications for score changes reveals that exposures to lower scores were associated with greater attention to uncovering weaknesses, whereas exposures to neutral or higher scores were associated with increased emphasis on nonevaluation criteria, such as confidence in one’s judgment. The greater power of negative information suggests that information sharing among expert evaluators can lead to more conservative allocation decisions that favor protecting against failure rather than maximizing success.
That is the title of the new Derek Thompson piece in The Atlantic. Here is one excerpt:
The existing layers of bureaucracy have obvious costs in speed. They also have subtle costs in creativity. The NIH’s pre-grant peer-review process requires that many reviewers approve of an application. This consensus-oriented style can be a check against novelty—what if one scientist sees extraordinary promise in a wacky idea but the rest of the board sees only its wackiness? The sheer amount of work required to get a grant also penalizes radical creativity. Many scientists, anticipating the turgidity and conservatism of the NIH’s approval system, apply for projects that they anticipate will appeal to the board rather than pour their energies into a truly new idea that, after a 500-day waiting period, might get rejected. This is happening in an academic industry where securing NIH funding can be make-or-break: Since the 1960s, doctoral programs have gotten longer and longer, while the share of Ph.D. holders getting tenure has declined by 40 percent.
First is the trust paradox. People in professional circles like saying that we “believe the science,” but ironically, the scientific system doesn’t seem to put much confidence in real-life scientists. In a survey of researchers who received Fast Grants, almost 80 percent said that they would change their focus “a lot” if they could deploy their grant money however they liked; more than 60 percent said they would pursue work outside their field of expertise, against the norms of the NIH. “The current grant funding apparatus does not allow some of the best scientists in the world to pursue the research agendas that they themselves think are best,” Collison, Cowen, and the UC Berkeley scientist Patrick Hsu wrote in the online publication Future in June. So major funders have placed researchers in the awkward position of being both celebrated by people who say they love the institution of science and constrained by the actual institution of science.
Much of the rest of the piece is a discussion of Fast Grants and also biomedical funding more generally.
We quantify global and regional aggregate damages from global warming of 1.5 to 4 °C above pre-industrial levels using a well-established integrated assessment model, PAGE09. We find mean global aggregate damages in 2100 of 0.29% of GDP if global warming is limited to about 1.5 °C (90% confidence interval 0.09–0.60%) and 0.40% for 2 °C (range 0.12–0.91%). These are, respectively, 92% and 89% lower than mean losses of 3.67% of GDP (range 0.64–10.77%) associated with global warming of 4 °C. The net present value of global aggregate damages for the 2008–2200 period is estimated at $48.7 trillion for ~ 1.5 °C global warming (range $13–108 trillion) and $60.7 trillion for 2 °C (range $15–140 trillion). These are, respectively, 92% and 90% lower than the mean NPV of $591.7 trillion of GDP for 4 °C warming (range $70–1920 trillion). This leads to a mean social cost of CO2 emitted in 2020 of ~ $150 for 4 °C warming as compared to $30 at ~ 1.5 °C warming. The benefits of limiting warming to 1.5 °C rather than 2 °C might be underestimated since PAGE09 is not recalibrated to reflect the recent understanding of the full range of risks at 1.5 °C warming.
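The relative comparisons in the abstract can be checked directly from the quoted numbers; a quick sketch (all figures from the abstract, rounding mine):

```python
# Checking the abstract's "92% and 89% lower" and "92% and 90% lower" claims.
# All inputs are the numbers quoted in the abstract.
mean_damage_15C, mean_damage_2C, mean_damage_4C = 0.29, 0.40, 3.67  # % of GDP in 2100
print(round(1 - mean_damage_15C / mean_damage_4C, 2))  # -> 0.92
print(round(1 - mean_damage_2C / mean_damage_4C, 2))   # -> 0.89

npv_15C, npv_2C, npv_4C = 48.7, 60.7, 591.7  # NPV of damages, $ trillion, 2008-2200
print(round(1 - npv_15C / npv_4C, 2))  # -> 0.92
print(round(1 - npv_2C / npv_4C, 2))   # -> 0.9
```

So the stated percentage reductions are internally consistent with the damage and NPV estimates given.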
That is from a new paper by R. Warren et al. The model does cover uncertainty, quadratic damages, and other features to steer it away from denialism. At the end of the calculation, however, for a temperature rise of three degrees Centigrade they still find a mean damage of 2% of global GDP, and a range extending up to three percent of global GDP in terms of foregone consumption. That is plausibly one year’s global growth.
If I understand them correctly, and I am not sure I do: “These give initial mean consumption discount rates of around 3% per year in developed regions and 48% [!] in developing ones.” And what are the non-initial rates? I just don’t follow the paper here, but probably I do not agree with it. Perhaps at least for the developed nations this is a useful upper bound for costs? And it is not insanely high.
Here is a piece by Johannes Ackva and John Halstead, “Good news on climate change.” Excerpt:
However, for a variety of reasons, SSP5-RCP8.5 [a kind of worst case default path] now looks increasingly unlikely as a ‘business as usual’ emissions pathway. There are several reasons for this. Firstly, the costs of renewables and batteries have declined extremely quickly. Historically, models have been too pessimistic on cost declines for solar, wind and batteries: out of nearly 3,000 Integrated Assessment Models, none projected that solar investment costs (different to the levelised costs shown below) would decline by more than 6% per year between 2010 and 2020. In fact, they declined by 15% per year.
Fundamentally, existing mainstream economic models of climate change consistently fail to model exponential cost declines, as shown on the chart below. The left pane below shows historical declines in solar costs compared to Integrated Assessment Model projections of costs. The pane on the right shows the cost of solar compared to Integrated Assessment Model assessments of ‘floor costs’ for solar – the lowest that solar could go. Real world solar prices have consistently smashed through these supposed floors.
…in order for us to follow SSP5-RCP8.5, there would have to be very fast economic growth and technological progress, but meagre progress on low carbon technologies. This does not seem very plausible. In order to reproduce SSP5-8.5 with newer models, the models had to assume that average global income per person will rise to $140,000 by 2100 and also that we would burn large amounts of coal.
And: “Global CO2 emissions have been flat for a decade, new data reveals.” Again, better than previous projections.
As I said in the title of this post, these are “Claims.” But overall I would say that the new results are slanting modestly in the less negative direction, though I am not sure that the headlines of the last two weeks are equally encouraging.
An excellent book, the author is Robert Kanigel and the subtitle is The Making of a Scientific Dynasty. It is strongest on the role of mentors and lineages in scientific excellence, the radically inegalitarian and “unfair” nature of scientific achievement and also credit, and it offers an interesting look at the early days of the NIH. Here is one excerpt:
But Brodie simply saw no reason to become an expert in an area to launch a study of it. Rather, as Sid Udenfriend says, “he would just wander into a new field and make advances that people fifteen years in the field couldn’t.” Poring through scientific journals didn’t appeal to him; picking the brains of colleagues did. “He’d go up to you,” Jack Orloff remembers, “and say, ‘Tell me what you know about X and Y.’ Sometimes he’d already know a lot, but he could come across as almost stupid.” Indeed, he could seem downright ignorant, asking disarmingly simple, even hopelessly naive questions, like a child. But as one admirer notes, “He’d end up asking just the questions you should have asked ten years ago.”
Beginning around 1955, the big stir at LCP was over serotonin. (“When the experiments were good, we called it serotonin,” Brodie would later recall… “When I heard it pronounced serotonin, I knew the experiments were bad and I stayed home.”)
Martin Zatz, a veteran of Julius Axelrod’s lab and a scientist with an uncommonly broad cast of mind, was talking about mentoring and its role in science. “Are you going to talk about the disadvantage of the mentor chain?” he asked me, smiling broadly.
What’s that? “That you don’t get anywhere,” he replied, now quite serious, “unless you’re in one.”
Recommended. Why are there not more excellent conceptual books on the history of science?
An unprecedented statement by current NASA Director and former Senator Bill Nelson. It is the most honest and forthright commentary to date on the UAP issue from a NASA Director, and perhaps the most thoughtful UAP-related statement ever made by a serving senior U.S. official: https://t.co/2RL4gBuCUD
— Christopher K. Mellon (@ChrisKMellon) October 23, 2021
And more on YouTube, for instance at 55:30.
Climate mitigation scenarios envision considerable growth of wind and solar power, but scholars disagree on how this growth compares with historical trends. Here we fit growth models to wind and solar trajectories to identify countries in which growth has already stabilized after the initial acceleration. National growth has followed S-curves to reach maximum annual rates of 0.8% (interquartile range of 0.6–1.1%) of the total electricity supply for onshore wind and 0.6% (0.4–0.9%) for solar. In comparison, one-half of 1.5 °C-compatible scenarios envision global growth of wind power above 1.3% and of solar power above 1.4%, while one-quarter of these scenarios envision global growth of solar above 3.3% per year. Replicating or exceeding the fastest national growth globally may be challenging because, so far, countries that introduced wind and solar power later have not achieved higher maximum growth rates, despite their generally speedier progression through the technology adoption cycle.
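The S-curves the paper fits are logistic adoption curves, whose maximum annual increment is r·K/4. A sketch with illustrative parameters (not taken from the paper), chosen so that the peak matches the 0.8%-per-year median the abstract reports for onshore wind:

```python
import math

# Logistic (S-curve) share of total electricity supply, the kind of growth
# model the paper fits. Parameter values are illustrative, not from the paper:
# K = saturation share, r = growth rate, t0 = inflection year.
def logistic_share(t, K=0.25, r=0.128, t0=2030):
    return K / (1 + math.exp(-r * (t - t0)))

# A logistic's derivative peaks at r * K / 4 (at t = t0), so these parameters
# imply a maximum annual increment of 0.128 * 0.25 / 4 = 0.008, i.e. 0.8
# percentage points of total supply per year, matching the cited median.
max_annual_growth = 0.128 * 0.25 / 4
print(max_annual_growth)  # -> 0.008
```

The paper's point is then that scenario growth rates of 1.3–3.3% per year would require the global peak increment to exceed what any country has so far sustained at the national level.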
That is a new paper from Nature Energy, by Aleh Cherp et al., via the excellent Kevin Lewis. Yes, yes, Moore’s Law for solar cost and all that, but we need to think about the problem more deeply, and that still implies a significant role for nuclear energy. And here is some good news:
Finland has joined France, Poland, Hungary, and the Czech Republic in lobbying the European Union to categorize nuclear power as sustainable. According to the Finnish Broadcasting Company, Finland’s pro-nuclear lobbying marks a U-turn within the Green Party.
Are you surprised that the airport pictured below (I assure you, it is a real place) has also installed high-capacity air filters and UV sanitization?
Since the onset of COVID-19, the air-conditioning system filters across the passenger terminals have been upgraded from MERV-7-rated models to MERV-14-rated ones. These higher grade filters can effectively remove about 85 per cent of the particles of 0.3 to 1.0 micrometres in size in the air, smaller than the size of a COVID-19 particle in a respiratory droplet.
To ensure the MERV-14 rated filters continue to operate at effective efficiency, they are replaced every one to two months, depending on the condition of use. All used filters are sealed for proper disposal by maintenance workers donning the highest level of personal protective equipment (PPE) for safe handling.
In addition, fresh air intake for the air-conditioning systems has also been maximised by fully opening the dampers to admit outdoor air.
As a further layer of protection, Changi Airport is installing Ultraviolet-C (UV-C) sanitisation equipment in Air-Handling Stations (AHS) and Air-Handling Units (AHU) progressively across all terminal air-conditioning systems. The UV-C kills any remnant virus traces in the mixture of fresh and returned air passing through the cooling coil, providing a second level of defence after the MERV-14 rated filters.
Singapore will thus have air filtration and UV sanitization in the airport before we have it in the hospitals.
Is the future slipping away from the United States? It seems that way sometimes. Only the high-tech sector is keeping us afloat and, of course, that is under attack by the elites.
Hat tip: Randall Parker.
Photo Credit: Matteo Morando.