Modern Artificial Intelligence (AI) research has become increasingly computationally intensive. However, a growing concern is that, due to unequal access to computing power, only certain firms and elite universities have advantages in modern AI research. Using a novel dataset of 171,394 papers from 57 prestigious computer science conferences, we document that firms, in particular large technology firms, and elite universities have increased participation in major AI conferences since deep learning’s unanticipated rise in 2012. The effect is concentrated among elite universities, which are ranked 1-50 in the QS World University Rankings. Further, we find two strategies through which firms increased their presence in AI research: first, they have increased firm-only publications; and second, firms are collaborating primarily with elite universities. Consequently, this increased presence of firms and elite universities in AI research has crowded out mid-tier (QS ranked 201-300) and lower-tier (QS ranked 301-500) universities. To provide causal evidence that deep learning’s unanticipated rise resulted in this divergence, we leverage the generalized synthetic control method, a data-driven counterfactual estimator. Using machine learning-based text analysis methods, we provide additional evidence that the divergence between these two groups – large firms and non-elite universities – is driven by access to computing power, or compute, which we term the “compute divide”. This compute divide between large firms and non-elite universities increases concerns around bias and fairness within AI technology, and presents an obstacle to “democratizing” AI. These results suggest that a lack of access to specialized equipment such as compute can de-democratize knowledge production.
That is a new paper by Nur Ahmed and Muntasir Wahed.
A surprising number of individuals responded to my post last week soliciting books about the NIH and NSF. Thank you to those who did and please do still feel free to reach out on this matter.
It became apparent that a highly complementary effort would be a Substack/blog/podcast/similar about the inner workings of the NIH / NSF, and indeed other institutions relevant to the modern-day administration and practice of science. Think SCOTUSblog or Macro Musings, but focused on the NIH/NSF/etc.
So, if you would like to start such a blog/podcast/newsletter, please email me, and that plan will be considered for financial support.
Andrew Dembe of Uganda, working on the “last mile” problem for health care delivery.
Maxwell Dostart-Meers of Harvard, to study Singapore and state capacity, as a Progress Studies fellow.
Markus Strasser of Linz, Austria, now living in London, to pursue a next-generation scientific search and discovery web interface that can answer complex quantitative questions, built on relations extracted from scientific text, such as graphs of causation, effects, biomarkers, quantities, etc.
Marc Sidwell of the United Kingdom, to write a book on common sense.
Yuen Yuen Ang, political scientist at the University of Michigan, from Singapore, to write a new book on disruption.
Matthew Clancy, Iowa State University, Progress Studies fellow. To build out his newsletter on recent research on innovation.
Samarth Athreya, Ontario: “I’m a 17 year old who is incredibly passionate about the advent of biomaterials and its potential to push humanity forward in a variety of industries. I’ve been speaking about my vision and some of my research on the progress of material science and nanotechnology specifically at various events like C2 Montreal, SXSW, and Elevate Tech Festival!”
Applied Divinity Studies, an anonymously written blog, has won an award for its writing and blogging. We are paying in bitcoin.
Jordan Mafumbo, a Ugandan autodidact and civil engineer studying Heidegger and the foundations of liberalism. He also has won an award for blogging.
UC San Francisco scientists have developed a single clinical laboratory test capable of zeroing in on the microbial miscreant afflicting a patient in as little as six hours – irrespective of what body fluid is sampled, the type or species of infectious agent, or whether physicians start out with any clue as to what the culprit may be.
The test will be a lifesaver, speeding appropriate drug treatment for the seriously ill, and should transform the way infectious diseases are diagnosed, said the authors of the study, published Nov. 9 in Nature Medicine.
“The advance here is that we can detect any infection from any body fluid, without special handling or processing for each distinct body fluid,” said study corresponding author Charles Chiu, MD, PhD, a professor in the UCSF Department of Laboratory Medicine and director of the UCSF-Abbott Viral Diagnostics and Discovery Center.
In a recent paper, Bloom et al. (2020) find evidence for a substantial decline in research productivity in the U.S. economy during the last 40 years. In this paper, we replicate their findings for China and Germany, using detailed firm-level data spanning three decades. Our results indicate that diminishing returns in idea production are a global phenomenon, not just confined to the U.S.
Also known as “how to approve a vaccine and still continue with stage III trials.” From Art B. Owen and Hal Varian:
Motivated by customer loyalty plans and scholarship programs, we study tie-breaker designs which are hybrids of randomized controlled trials (RCTs) and regression discontinuity designs (RDDs). We quantify the statistical efficiency of a tie-breaker design in which a proportion Δ of observed subjects are in the RCT. In a two line regression, statistical efficiency increases monotonically with Δ, so efficiency is maximized by an RCT. We point to additional advantages of tie-breakers versus RDD: for a nonparametric regression the boundary bias is much less severe and for quadratic regression, the variance is greatly reduced. For a two line model we can quantify the short term value of the treatment allocation and this comparison favors smaller Δ with the RDD being best. We solve for the optimal tradeoff between these exploration and exploitation goals. The usual tie-breaker design applies an RCT on the middle Δ subjects as ranked by the assignment variable. We quantify the efficiency of other designs such as experimenting only in the second decile from the top. We also show that in some general parametric models a Monte Carlo evaluation can be replaced by matrix algebra.
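The design in the quoted abstract can be illustrated with a short simulation. This is my own sketch, not the authors’ code, and the function name and NumPy setup are assumptions: rank subjects by the assignment variable, always treat the top stratum (as in an RDD), never treat the bottom, and randomize the middle proportion Δ (as in an RCT).

```python
import numpy as np

def tie_breaker_assign(x, delta, rng):
    """Tie-breaker design: subjects above the upper cutoff are always
    treated, subjects below the lower cutoff never are, and the middle
    proportion `delta` (as ranked by x) is randomized 50/50."""
    lo = np.quantile(x, 0.5 - delta / 2)
    hi = np.quantile(x, 0.5 + delta / 2)
    z = np.where(x >= hi, 1, 0)            # top stratum: always treated
    middle = (x >= lo) & (x < hi)          # middle delta: randomized
    z[middle] = rng.integers(0, 2, middle.sum())
    return z

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)                # assignment variable (e.g. a test score)
z = tie_breaker_assign(x, delta=0.4, rng=rng)
```

Setting delta=0 recovers a sharp RDD and delta=1 a pure RCT, which matches the abstract’s point that statistical efficiency rises monotonically with Δ while short-term treatment value favors smaller Δ.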
Published version here. Whether or not you agree with that particular approach, you can view 2020 in the following terms. Public health experts have told us that:
1. We citizens have to lock down many of our schools and sometimes jobs.
2. We citizens have to significantly change many of our commercial and retail and travel habits.
3. We citizens have to significantly limit or cut off many of our contacts with other human beings.
At the same time, they also are saying that:
4. “We public health experts do not have to come up with a way of approving a vaccine and simultaneously continuing to conduct our other clinical trials.”
And they wonder why people do not have greater faith in science.
That is the title and theme of my latest Bloomberg column, here is one excerpt:
And who should get the vaccine first? The elderly are more vulnerable, but the young are more likely to spread Covid-19. Some recent results suggest it would be better to vaccinate the young first, but that is less politically likely. Again, it is easy to see potential conflicts over this question, cutting across traditional party lines.
An even more complex problem would arise if one good vaccine is available but other, possibly better, vaccines are imminent. Does everyone get the “good enough” vaccine, disrupting the ability to conduct clinical trials to see if the other vaccines are better? How much patience do Americans have, really?
Americans would probably resent having to wait. But if they end up choosing a lesser quality vaccine, over the long run they might be unhappier yet. It is not clear the U.S. public health bureaucracy is up to the task of approving one vaccine and restructuring the other trials (possibly by paying participants more to stay in, or by shifting to other countries for data) so they can continue.
Be prepared for a mess, with almost everybody unhappy.
By Rebecca Wragg Sykes, an excellent book, a very responsible treatment of what we do and do not know about Neanderthals, with a bit on Denisovans as well. It is a book full of sentences such as: “Micro-morphology has also provided proof that, far from being slovenly, Neanderthals were regularly disposing of their rubbish.” It seems they enjoyed mussels and also grubs, among many other foodstuffs. The hearth was the center of the home and they had fairly advanced systems for butchery. They used leather and deployed pigments.
I enjoyed this segment:
Parisians, Londoners or Berliners today with ostensibly European heritage have very little connection even to Mesolithic people just 10,000 years ago. The vast majority of their DNA comes from a massive influx of Western Asian peoples during the Neolithic. This means that many of the first H. sapiens populations are more extinct than the Neanderthals; not a great sign of evolutionary dominance.
Recommended, you can order here.
Neurological manifestations are a significant complication of coronavirus disease 2019 (COVID-19). Understanding how COVID-19 contributes to neurological disease is needed for appropriate treatment of infected patients, as well as in initiating relevant follow-up care after recovery. Investigation of autopsied brain tissue has been key to advancing our understanding of the neuropathogenesis of a large number of infectious and non-infectious diseases affecting the central nervous system (CNS). Due to the highly infectious nature of the etiologic agent of COVID-19, severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), there is a paucity of tissues available for comprehensive investigation. Here, we show, for the first time, microhemorrhages and neuropathology consistent with hypoxic injury in SARS-CoV-2 infected non-human primates (NHPs). Importantly, this was seen among infected animals that did not develop severe respiratory disease. This finding underscores the importance of vaccinating against SARS-CoV-2, even among populations that have a reduced risk of developing severe disease, to prevent long-term or permanent neurological sequelae. Sparse virus was detected in brain endothelial cells but did not associate with the severity of CNS injury. We anticipate our findings will advance our current understanding of the neuropathogenesis of SARS-CoV-2 infection and demonstrate SARS-CoV-2 infected NHPs are a highly relevant animal model for investigating COVID-19 neuropathogenesis among human subjects.
That is from new Fast Grants supported research by Tracy Fischer, et al. And here are some related earlier results from Kabbani and Olds. Here are some more general recent results about brain damage.
How bad are these micro-hemorrhages anyway? I don’t know! You may notice I have hardly lunged at the “permanent damage” papers that have been coming out on Covid (in fact many of them already have collapsed or not replicated). But there are genuine reasons for caution, these results do not seem to be collapsing, and Covid-19 is not just a bunch of people trying to make a mountain out of a molehill. And “exposing the young” decisions should not be taken lightly either. The people who are very cautious about reopening may be too risk-averse given realistic alternatives, but they are not all just statists, Trump haters, lazy teachers’ unions, and so on. There are very genuine concerns here.
A reader writes —
“Despite being the preeminent model for global science funding, and far more powerful than any single university, the workings of the NIH or NSF are surprisingly opaque to most people. These bodies shape who becomes a scientist, what science they pursue, and how they pursue it. I would therefore like to fund a book about how the institutions of US science actually operate, how they’ve changed, what the relevant surrounding incentives are, and how it is that they should likely evolve from here. It’s possible, perhaps even very likely, that a good version of this book would be picked up by a good publisher. Even if it isn’t, it should exist in the public domain. I will invest generously in anyone who seeks to write one.”
This reader is highly credible. If you’re interested and have relevant expertise, please email me. (Suggestions for good possible authors — people who genuinely understand the system but who could be sufficiently objective and, where relevant, critical — are welcome although not as useful.)
There are many ways to conduct clinical trials while releasing a vaccine—indeed, we can make the clinical trials better by randomizing a phased release. Suppose we decide health care and transit workers should be vaccinated first. No problem–offer the workers the vaccine, put the SSNs of those who want the vaccine into a hat like draft numbers, vaccinate a randomly chosen sub-sample, and monitor everyone. This is the well-known lottery technique for measuring causal effects, often used in the school choice literature. If we use this technique we can greatly increase sample sizes, and as we study each wave we will gather more confidence in the data. We won’t have enough vaccine in November to vaccinate everyone, or probably even all health care and transit workers, so a lottery is an ethically fair as well as statistically useful way to distribute the vaccine. We can also randomize across cities and regions.
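The lottery step described above can be sketched in a few lines. This is only an illustration of the randomization, not Alex’s code; the worker IDs and dose count are hypothetical.

```python
import numpy as np

def vaccine_lottery(ids, n_doses, rng):
    """Draw a random subset of volunteers sized to the available supply;
    everyone else forms the natural comparison group for this wave."""
    ids = np.asarray(ids)
    winners = rng.choice(ids, size=n_doses, replace=False)
    return np.isin(ids, winners)       # True = vaccinated in this wave

rng = np.random.default_rng(42)
workers = np.arange(100_000)           # hypothetical volunteer IDs
got_dose = vaccine_lottery(workers, n_doses=30_000, rng=rng)
```

Because winners are chosen at random, comparing outcomes between winners and non-winners in each wave yields an unbiased estimate of the vaccine’s effect, the same logic as lottery-based studies in the school choice literature.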
That is from a recent post by Alex Tabarrok, on the blog Marginal Revolution, and there is more at the link. Of course I don’t have to tell you what Alex’s brother thinks of all this.
Addendum: Anup Malani notes:
BTW, those worried about ethics here should note that most product markets, even many dangerous ones (including non-FDA-regulated medical care), use the population testing approach. Drugs are the exception. Elsewhere we handle risk exclusively via ex post tort liability.
So please don’t offer some kind of passive, under-argued Twitter comment on how unacceptably unethical it is — do some analysis and empirics on the trade-offs! And read up on surgical procedures while you are at it.
In case you think any of us understand the world very well:
Oncologists have stumbled on a “previously unnoticed” pair of salivary glands while studying the effect of radiotherapy on salivation and swallowing. The elusive glands are in an inaccessible spot and can be spotted only with very sensitive imaging, such as positron emission tomography and computed tomography. Researchers say the glands could help to explain why cancer treatment can cause dry mouth and swallowing problems, especially because doctors haven’t known to spare the organs from damage.
The study of economics does not seem to require any specialized gifts of an unusually high order. Is it not, intellectually regarded, a very easy subject compared with the higher branches of philosophy and pure science? Yet good, or even competent, economists are the rarest of birds. An easy subject, at which very few excel! The paradox finds its explanation, perhaps, in that the master-economist must possess a rare combination of gifts. He must reach a high standard in several different directions and must combine talents not often found together. He must be mathematician, historian, statesman, philosopher – in some degree. He must understand symbols and speak in words. He must contemplate the particular in terms of the general, and touch abstract and concrete in the same flight of thought. He must study the present in the light of the past for the purposes of the future. No part of man’s nature or his institutions must lie entirely outside his regard. He must be purposeful and disinterested in a simultaneous mood; as aloof and incorruptible as an artist, yet sometimes as near the earth as a politician.
That is from Keynes’s 1924 essay on Marshall, reprinted in Essays in Biography. Most of all, it is Keynes describing himself!
“If quantum computing actually materializes, in the sense that it’s a large scale, reliable computing option for us, then we’re going to enter a completely different era of simulation,” Davoudi says. “I am starting to think about how to perform my simulations of strong interaction physics and atomic nuclei if I had a quantum computer that was viable.”
All of these factors have led Davoudi to speculate about the simulation hypothesis. If our reality is a simulation, then the simulator is likely also discretizing spacetime to save on computing resources (assuming, of course, that it is using the same mechanisms as our physicists for that simulation). Signatures of such discrete spacetime could potentially be seen in the directions high-energy cosmic rays arrive from: they would have a preferred direction in the sky because of the breaking of so-called rotational symmetry.
Telescopes “haven’t observed any deviation from that rotational invariance yet,” Davoudi says. And even if such an effect were to be seen, it would not constitute unequivocal evidence that we live in a simulation. Base reality itself could have similar properties.
Here is further discussion from Anil Ananthaswamy. Via Robert Nelsen. As you may already know, my view is that there is no proper external perspective, and the concept of “living in a simulation” is not obviously distinct from living in a universe that follows some kind of laws, whether natural or even theological. The universe is simultaneously the simulation and the simulator itself! Anything “outside the universe doing the simulating” is then itself “the (mega-)universe that is simultaneously the simulation and the simulator itself.” etc.
Here is my 2x normal length Bloomberg column on that topic, as had been requested by Daniel Klein. The argument has numerous twists and turns, do read the whole thing but here is one bit (I will indent only their words):
“Here are the key words of the Great Barrington Declaration on herd immunity:
The most compassionate approach that balances the risks and benefits of reaching herd immunity, is to allow those who are at minimal risk of death to live their lives normally to build up immunity to the virus through natural infection, while better protecting those who are at highest risk. We call this Focused Protection.
What exactly does the word “allow” mean in this context? Again the passivity is evident, as if humans should just line up in the proper order of virus exposure and submit to nature’s will. How about instead we channel our inner Ayn Rand and stress the role of human agency? Something like: “Herd immunity will come from a combination of exposure to the virus through natural infection and the widespread use of vaccines. Here are some ways to maximize the role of vaccines in that process.”
And the close:
“In most parts of the Western world, normal openings for restaurants, sporting events and workplaces are likely to lead to spiraling caseloads and overloaded hospitals, as is already a risk in some of the harder-hit parts of Europe. Reopenings, to the extent they work, rely on a government that so scares people that attendance remains low even with reopening.
In that sense, as things stand, there is no “normal” to be found. An attempt to pursue it would most likely lead to panic over the numbers of cases and hospitalizations, and would almost certainly make a second lockdown more likely. There is no ideal of liberty at the end of the tunnel here.
Don’t get me wrong: The Great Barrington strategy is a tempting one. Coming out of a libertarian think tank, it tries to procure maximum liberty for commerce and daily life. It is a seductive idea. Yet consistency of message is not an unalloyed good, even when the subject is liberty…
My worldview is both more hopeful and more tragic. There is no normal here, but we can do better — with vigorous actions to combat Covid-19, including government actions. The conception of human nature evident in the Great Barrington Declaration is so passive, it raises the question of whether it even qualifies as a defense of natural liberty.”
MR Tyler again: You will note I do not make the emotional, question-begging argument that herd immunity strategies will kill millions (though I do think more people die under that scenario). If you argue, as many herd immunity critics do, that the elderly cannot be isolated, it seems you also should not be entirely confident that the currently non-infected can be isolated. The brutal truth is simply that a Great Barrington strategy put into practice would lead to rapidly spiraling cases and a rather quick and oppressive second lockdown, worse than what the status quo or some improved version of it is likely to bring. Total deaths are likely higher, along with more social trauma, due to the more extreme whipsaw effects, but no, not by millions.
Let’s accelerate those biomedicals, people!