Excellent and interesting throughout, here is the transcript, video, and audio. Here is part of the summary:
He joined Tyler for a conversation about which areas of science are making progress, the factors that have made research more expensive, why government should invest more in R&D, how lean management transformed manufacturing, how India’s congested legal system inhibits economic development, the effects of technology on Scottish football hooliganism, why firms thrive in China, how weak legal systems incentivize nepotism, why he’s not worried about the effects of remote work on American productivity (in the short-term), the drawbacks of elite graduate programs, how his first “academic love” shapes his work today, the benefits of working with co-authors, why he prefers periodicals and podcasts to reading books, and more.
Here is an excerpt:
COWEN: If I understand your estimates correctly, efficacy per researcher, as you measure it, is falling by about 5 percent a year [paper here]. That seems phenomenally high. What’s the mechanism that could account for such a rapid decline?
BLOOM: The big picture — just to make sure everyone’s on the same page — is, if you look in the US, productivity growth . . . In fact, I could go back a lot further. It’s interesting — you go much further, and you think of European and North American history. In the UK, which has better data, there was very, very little productivity growth until the Industrial Revolution. Literally, from the time the Romans left in whatever, roughly 100 AD, until 1750, technological progress was very slow.
Sure, the British were more advanced at that point, but not dramatically. The estimates were like 0.1 percent a year, so very low. Then the Industrial Revolution starts, and it starts to speed up and speed up and speed up. And technological progress, in terms of productivity growth, peaks in the 1950s at something like 3 to 4 percent a year, and then it’s been falling ever since.
Then you ask about that rate of fall — it’s roughly 5 percent a year. That’s the rate at which productivity growth would have fallen if we had held inputs constant. The one thing that’s been offsetting that fall in the rate of progress is that we’ve put more and more resources into it. Again, if you think of the US, the number of research universities has exploded, as has the number of firms with research labs.
Thomas Edison’s, for example, was the first lab, about 100 years ago, but post–World War II, most large American companies have been pushing huge amounts of cash into R&D. But despite all of that increase in inputs, productivity growth has actually been slowing over the last 50 years. That’s the sense in which it’s harder and harder to find new ideas. We’re putting more inputs into labs, but productivity growth is falling.
COWEN: Let’s say paperwork for researchers is increasing, bureaucratization is increasing. How do we get that to be negative 5 percent a year as an effect? Is it that we’re throwing kryptonite at our top people? Your productivity is not declining 5 percent a year, or is it? COVID aside.
BLOOM: COVID aside. Yeah, it’s hard to tell your own productivity. Oddly enough, I always feel like, “Ah, you know, the stuff that I did before was better research ideas.” And then something comes along. I’d say personally, it’s very stochastic. I find it very hard to predict it. Increasingly, it comes from working with basically great, and often younger, coauthors.
Why is it happening at the aggregate level? I think there are three things going on. One actually comes back to Ben Jones, who had an important paper called, I believe, “[Death of the] Renaissance Man.” This came out 15 years ago or something. The idea was, it takes longer and longer for us to train.
Just in economics — when I first started in economics, it was standard to do a four-year PhD. It’s now a six-year PhD, plus many of the PhD students have done a pre-doc, so they’ve done an extra two years. We’re taking three or four years longer just to get to the research frontier. There’s so much more knowledge before us, it just takes longer to train up. That’s one story.
A second story I’ve heard is, research is getting more complicated. I remember I sat down with a former CEO of SRI, Stanford Research Institute, which is a big research lab out here that’s done many things. For example, Siri came out of SRI. He said, “Increasingly it’s interdisciplinary teams now.”
It used to be that one or two scientists could come up with great ideas. Now, you’re having to combine a couple. I can’t remember if he said this for Siri, but he said there were three or four different research groups in SRI that were pulled together to do that. That of course makes it more expensive. And when you think of biogenetics, combining biology and genetics, or bioengineering, there are many more cross-field areas.
Then finally, as you say, I suspect regulation costs, various other factors are making it harder to undertake research. A lot of that’s probably good. I’d have to look at individual regulations. Health and safety, for example, is probably a good idea, but in the same way, that is almost certainly making it more expensive to run labs…
COWEN: What if I argued none of those are the central factors because, if those were true as the central factors, you would expect the wages of scientists, especially in the private sector, to be declining, say by 5 percent a year. But they’re not declining. They’re mostly going up.
Doesn’t the explanation have to be that scientific efforts used to be devoted to public goods much more, and now they’re being devoted to private goods? That’s the only explanation that’s consistent with rising wages for scientists but a declining social output from their research, their scientific productivity.
COWEN: What exactly is the value of management consultants? Because to many outsiders, it appears absurd that these not-so-well-trained young people come in. They tell companies what to do. Sometimes it’s even called fraudulent if they command high returns. How does this work? What’s the value added?
What determines the success of a COVID-19 Test & Trace policy? We use an SEIR agent-based model on a graph, with realistic epidemiological parameters. Simulating variations in certain parameters of Testing & Tracing, we find that important determinants of successful containment are: (i) the time from symptom onset until a patient is self-isolated and tested, and (ii) the share of contacts of a positive patient who are successfully traced. Comparatively less important is (iii) the time of test analysis and contact tracing. When the share of contacts successfully traced is higher, the Test & Trace Time rises somewhat in importance. These results are robust to a wide range of values for how infectious presymptomatic patients are, to the amount of asymptomatic patients, to the network degree distribution and to base epidemic growth rate. We also provide mathematical arguments for why these simulation results hold in more general settings. Since real world Test & Trace systems and policies could affect all three parameters, Symptom Onset to Test Time should be considered, alongside test turnaround time and contact tracing coverage, as a key determinant of Test & Trace success.
That is from a new paper by Ofir Reich.
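For intuition, here is a minimal agent-based SEIR sketch in the spirit of the paper’s setup. Everything in it (the `simulate` helper, the contact network, and all parameter values) is an illustrative placeholder, not the paper’s actual model or calibration; it only shows how the three Test & Trace determinants enter such a simulation.

```python
import random

# States of the SEIR compartmental model.
S, E, I, R = range(4)

def simulate(n=2000, degree=8, days=120, p_transmit=0.04,
             incubation=3, infectious_days=7,
             onset_to_isolation=2,   # (i) symptom onset -> isolation/test
             trace_share=0.8,        # (ii) share of contacts traced
             trace_delay=1,          # (iii) test/trace turnaround, days
             n_seed=10, seed=0):
    """Return the total number of agents ever infected."""
    rng = random.Random(seed)
    # Fixed contact network: each agent gets `degree` random contacts.
    contacts = [rng.sample(range(n), degree) for _ in range(n)]
    state = [S] * n
    days_in_state = [0] * n
    isolated = [False] * n          # isolation is permanent in this sketch
    iso_queue = []                  # (effective_day, agent) trace events
    for a in rng.sample(range(n), n_seed):
        state[a] = I
    infected_ever = n_seed
    for day in range(days):
        # Execute trace-isolations whose turnaround delay has elapsed.
        due = [a for d, a in iso_queue if d <= day]
        iso_queue = [(d, a) for d, a in iso_queue if d > day]
        for a in due:
            isolated[a] = True
        # Transmission along network edges by non-isolated infectious agents.
        new_exposed = []
        for a in range(n):
            if state[a] == I and not isolated[a]:
                for c in contacts[a]:
                    if state[c] == S and not isolated[c] \
                            and rng.random() < p_transmit:
                        new_exposed.append(c)
            days_in_state[a] += 1
        for c in new_exposed:
            if state[c] == S:
                state[c], days_in_state[c] = E, 0
                infected_ever += 1
        # Compartment transitions and Test & Trace triggers.
        for a in range(n):
            if state[a] == E and days_in_state[a] >= incubation:
                state[a], days_in_state[a] = I, 0
            elif state[a] == I:
                if days_in_state[a] == onset_to_isolation and not isolated[a]:
                    isolated[a] = True   # case self-isolates and is tested
                    for c in contacts[a]:
                        if rng.random() < trace_share:
                            iso_queue.append((day + trace_delay, c))
                if days_in_state[a] >= infectious_days:
                    state[a], isolated[a] = R, False
    return infected_ever
```

Comparing, say, `simulate(onset_to_isolation=1)` against `simulate(onset_to_isolation=6)` contrasts fast and slow isolation of symptomatic cases, the paper’s determinant (i).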
Formal pre-doc programmes have burgeoned, especially in elite universities such as Harvard, Stanford, the University of Chicago and Yale. Participants clean and analyse data, write papers and do administrative tasks. In exchange they may receive free or subsidised classes, a salary in the region of $50,000, potential co-authorship of the papers they work on, and, most prized of all, a letter of recommendation to a top programme.
In part pre-docs show how economic research has changed. “Economics has become more like the sciences in terms of both the methods and the production process,” says Raj Chetty of Harvard, who directs the Opportunity Insights team, a group with a reputation for working its pre-docs hard. When analysing tax records to which only a certain number of people could be given access, he switched away from using part-time research assistants to a lab-like team, inspired by his own family of scientists. As bigger data sets, new techniques and generous funding made such collaboration worthwhile, others followed.
Here is much more on pre-docs from Soumaya Keynes at The Economist. I suspect this development is inevitable, but I see at least two things going on here. First, letter writers are internalizing the very high value of those letters in the form of personal services received. Second, this will push out “weirdos” and make the profession more homogenized, more obedient, more elite, more dependent on school of origin, and less interesting. I do understand the value of the training received, and don’t propose any mechanism to “stop this,” but overall it does not make me an entirely happy camper.
One part of the mycelium had access to a big patch of phosphorus. Another part had access to a small patch. She was interested in how this would affect the fungus’s trading decisions in different parts of the same network. Some recognizable patterns emerged. In parts of a mycelial network where phosphorus was scarce, the plant paid a higher “price,” supplying more carbon to the fungus for every unit of phosphorus it received. Where phosphorus was more readily available, the fungus received a less favorable “exchange rate.” The “price” of phosphorus seemed to be governed by the familiar dynamics of supply and demand.
Most surprising was the way that the fungus coordinated its trading behavior across the network. Kiers identified a strategy of “buy low, sell high.” The fungus actively transported phosphorus — using its dynamic microtubule “motors” — from areas of abundance, where it fetched a low price when exchanged with a plant root, to areas of scarcity, where it was in higher demand and fetched a higher price. By doing so, the fungus was able to transfer a greater proportion of its phosphorus to the plant at the more favorable exchange rate, thus receiving larger quantities of carbon in return.
We still do not understand how those behaviors are controlled. And that is all from the new and excellent Merlin Sheldrake book Entangled Life: How Fungi Make Our Worlds, Change Our Minds, & Shape Our Futures.
Let’s not give them Twitter:
They found that the bat noises are not just random, as previously thought, reports Skibba. They were able to classify 60 percent of the calls into four categories. One of the call types indicates the bats are arguing about food. Another indicates a dispute about their positions within the sleeping cluster. A third call is reserved for males making unwanted mating advances and the fourth happens when a bat argues with another bat sitting too close. In fact, the bats make slightly different versions of the calls when speaking to different individuals within the group, similar to a human using a different tone of voice when talking to different people. Skibba points out that besides humans, only dolphins and a handful of other species are known to address individuals rather than making broad communication sounds.
I don’t view this as a formal answer, but it is interesting nonetheless:
Mycelium is how fungi feed. Some organisms — such as plants that photosynthesize — make their own food. Some organisms — like most animals — find food in the world and put it inside their bodies, where it is digested and absorbed. Fungi have a different strategy. They digest the world where it is and then absorb it into their bodies…
The difference between animals and fungi is simple: Animals put food in their bodies, whereas fungi put their bodies in the food.
…to embed oneself in an irregular and unpredictable food supply, as mycelium does, one must be able to shape-shift. Mycelium is a living, growing, opportunistic investigation — speculation in bodily form.
That is from the new and excellent book by Merlin Sheldrake, Entangled Life: How Fungi Make Our Worlds, Change Our Minds, & Shape Our Futures.
Fungi are prodigious decomposers, but of their many biochemical achievements, one of the most impressive is this ability of white rot fungi to break down the lignin in wood. Based on their ability to release free radicals, the peroxidases produced by white rot fungi perform what is technically known as “radical chemistry.” “Radical” has it right. These enzymes have forever changed the way that carbon journeys through its earthly cycles. Today, fungal decomposition — much of it of woody plant matter — is one of the largest sources of carbon emissions, emitting about eighty-five gigatons of carbon to the atmosphere every year. In 2018, the combustion of fossil fuels by humans emitted around ten gigatons.
That is from the new and excellent book by Merlin Sheldrake, Entangled Life: How Fungi Make Our Worlds, Change Our Minds & Shape Our Futures.
From an email from Agustin Lebron, noting that I will impose no further indentation:
“One thing that’s worth noting:
The degree of excitement about GPT-3 as a replacement for human workers, or as a path to AGI, is strongly inversely correlated with:
(a) How close the person is to the actual work. If you look at the tweets from Altman, Sutskever and Brockman, they’re pumping the brakes pretty hard on expectations.
(b) How much the person has actually built ML systems.
It’s a towering achievement to be able to train a system this big. But to me it’s clearly a dead-end on the way to AGI:
– The architecture itself is 3 years old: https://arxiv.org/abs/1706.03762. It is not an exaggeration to say that GPT-3’s architecture can be described as “take that 2017 paper and make 3 numbers (width, # layers, # heads) much bigger”. The fact that there hasn’t been any improvement in architecture in 3 years is quite telling.
– In the paper itself, the authors clearly say they’re quite near fundamental limits in being able to train an architecture like this. GPT-3 isn’t a starting point, it’s an end-point.
– If you look at more sober assessments (http://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html, https://minimaxir.com/2020/07/gpt3-expectations/), without the tweet selection bias, it starts to look less impressive.
– Within my fairly heterogeneous circle of ML-expert friends, there’s little disagreement about dead-end-ness.
The most interesting thing about GPT-3 is the attention and press that it’s gotten. I’m still not sure what to make of that, but it’s very notable.
Again, it’s incredibly impressive and also piles of fun, but I’m willing to longbet some decent money that we’re not replacing front-end devs with attention-layer-stacks anytime soon.”
I remain bullish, but it is always worth considering other opinions.
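On the “three numbers” point above: a standard back-of-the-envelope for decoder-only transformers is that the attention and MLP weights come to roughly 12 · n_layers · d_model². This is a rough rule of thumb, not an exact count (it ignores embeddings, biases, and layer norms), but plugging in the published GPT-2 XL and GPT-3 175B hyperparameters shows the scale-up:

```python
def approx_params(n_layers: int, d_model: int) -> int:
    """Rough transformer parameter count: attention + MLP weight
    matrices only; ignores embeddings, biases, and layer norms."""
    return 12 * n_layers * d_model ** 2

# Published hyperparameters for the two models:
gpt2_xl = approx_params(n_layers=48, d_model=1600)   # ~1.5 billion
gpt3 = approx_params(n_layers=96, d_model=12288)     # ~174 billion
```

Same architecture, roughly a hundredfold more parameters, which is essentially the scaling the email describes; the head count shapes the attention but does not enter this leading-order total.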
I pick the United Kingdom, even though their public health response has been generally poor. Why? Their researchers have discovered the single-best mortality-reducing treatment, namely dexamethasone (the cheap steroid), and the Oxford vaccine is arguably the furthest along. In a world where ideas are global public goods, research matters more than the quality of your testing regime!
And the very recent results on interferon beta — still unconfirmed I should add — come from…the UK.
At the very least, the UK is a clear first in per capita terms. Here are the closing two paragraphs:
It is fine and even correct to lecture the British (and the Americans) for their poorly conceived messaging and public health measures. But it is interesting how few people lecture the Australians or the South Koreans for not having a better biomedical research establishment. It is yet another sign of how societies tend to undervalue innovation — which makes the U.K.’s contribution all the more important.
Critics of Brexit like to say that it will leave the U.K. as a small country of minor import. Maybe so. In the meantime, the Brits are on track to save the world.
Here is my full Bloomberg column on that topic. And if you wish to go a wee bit Straussian on this one, isn’t it better if the poor performers on public health measures — if there are going to be some — are (sometimes) the countries with the best and most dynamic biomedical establishments? Otherwise all the panic and resultant scurry amounts to nothing. When Mexico has a poor public health response to Covid-19, the world doesn’t get that much back in return. In this regard, I suspect that biomedical innovation in the United States is more sensitive to internal poor performance on Covid-19 than is the case for Oxford.
According to a research paper accepted for publication in the Journal of the British Interplanetary Society, extraterrestrials are sleeping while they wait. In the paper, Anders Sandberg, Stuart Armstrong, and Milan Cirkovic, authors from Oxford’s Future of Humanity Institute and the Astronomical Observatory of Belgrade, argue that the universe is too hot right now for advanced, digital civilizations to make the most efficient use of their resources. The solution: sleep and wait for the universe to cool down, a process known as aestivating (like hibernation, but sleeping until it’s colder).
The universe appears to be cooling down on its own. Over the next trillions of years, as it continues to expand and the formation of new stars slows, the background radiation will reduce to practically zero. Under those conditions, Sandberg and Cirkovic explain, this kind of artificial life would get “tremendously more done.” Tremendous isn’t an understatement, either. The researchers calculate that by employing such a strategy, they could achieve up to 10^30 times more than if done today. That’s a 1 with 30 zeroes after it.
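The thermodynamic logic behind aestivation can be sketched with Landauer’s bound: erasing one bit of information costs at least k_B · T · ln 2, so irreversible computation gets cheaper in direct proportion as the ambient temperature falls. The far-future temperature below is a hypothetical placeholder chosen to show how an advantage on the order of 10^30 could arise; it is not a figure taken from the paper.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, joules per kelvin

def landauer_joules_per_bit(temp_kelvin: float) -> float:
    """Minimum energy to erase one bit at a given temperature."""
    return K_B * temp_kelvin * math.log(2)

cost_now = landauer_joules_per_bit(2.725)     # at today's CMB temperature
cost_later = landauer_joules_per_bit(1e-29)   # hypothetical far-future temp
advantage = cost_now / cost_later             # ~2.7e29 more erasures per joule
```

The ratio scales linearly with temperature, so waiting for a colder universe buys a fixed energy budget enormously more computation.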
You know, turkeys sound a lot like aliens, if you just name them part by part.
“Anatomical structures on the head and throat of a domestic turkey. 1. Caruncles, 2. Snood, 3. Wattle (Dewlap), 4. Major caruncle, 5. Beard”
That is from Jackson Stone.
I agree with the author’s claim that climate change is not an existential risk for humanity. Still, both the title and subtitle bother me. The alarm does not seem to be a false one, even if many of the worriers make grossly overstated claims about the end of the earth. And right now “climate change panic” is not costing us “trillions,” rather virtually all countries are failing to reduce their carbon emissions and most are not even trying very hard.
There should be more of a focus on the insurance value of avoiding the worst plausible scenarios, which are still quite bad. There is no argument in this book which overturns the Weitzman-like calculations that preventive measures are desirable.
I can report that the author endorses a carbon tax, more investment in innovation, and greater adaptation, with geoengineering as a back-up plan, more or less the correct stance in my view.
There is much in this book of value, and the criticisms of the exaggerated worriers are mostly correct. Still, the oppositional framing of the material doesn’t seem appropriate these days, and Lomborg will have to choose whether he wishes to be “leader of the opposition,” or “provider of the best possible message.” Or has he already chosen?
…Big Five Conscientiousness was not found to correlate with mask wearing in a sample of thousands in Spain during the coronavirus epidemic (Barceló & Sheen, 2020). This was not treated by the authors as any kind of falsification of the Big Five, or even evidence against it. The abstract noun “conscientiousness” has a rich meaning, only part of which is captured by the Big Five, and only a tinier part of which is captured by the two-question methodology used here (“does a thorough job” and “tends to be lazy”). But Conscientiousness is often correlated to health behaviors, and is often said to predict them with various strengths, even though the questions in the survey focus on job performance and tidiness.
I will be doing a Conversation with him, so what should I ask? Here is part of his official bio:
Nicholas (Nick) Bloom is the William Eberle Professor of Economics at Stanford University, a Senior Fellow of SIEPR, and the Co-Director of the Productivity, Innovation and Entrepreneurship program at the National Bureau of Economic Research. His research focuses on management practices and uncertainty. He previously worked at the UK Treasury and McKinsey & Company.
Is there anyone whose name is on more important/interesting papers over the last ten years? Here is a sampling.
So what should I ask him?
They are reopening campus for the coming semester and here is one reason why:
…the finding from Cornell researchers that holding the semester online potentially could result in more infections and more hospitalizations among students and staff members than holding the semester in person would.
A study by Cornell researchers concluded that with nominal parameters, an in-person semester would result in 3.6 percent of the campus population (1,254 people) becoming infected, and 0.047 percent (16 people) requiring hospitalization. An online semester, they concluded, would result in about 7,200 infections and more than 60 hospitalizations.
Do note it is critical to the argument that the returning students actually are tested on a regular basis, which of course is very hard to enforce when the semester is online.