Category: Science
Global warming, and rate effects vs. level effects
There is a very interesting new paper on this topic by Ishan B. Nath, Valerie A. Ramey, and Peter J. Klenow. Here is the abstract:
Does a permanent rise in temperature decrease the level or growth rate of GDP in affected countries? Differing answers to this question lead prominent estimates of climate damages to diverge by an order of magnitude. This paper combines indirect evidence on economic growth with new empirical estimates of the dynamic effects of temperature on GDP to argue that warming has persistent, but not permanent, effects on growth. We start by presenting a range of evidence that technology flows tether country growth rates together, preventing temperature changes from causing growth rates to diverge permanently. We then use data from a panel of countries to show that temperature shocks have large and persistent effects on GDP, driven in part by persistence in temperature itself. These estimates imply projected future impacts that are three to five times larger than level effect estimates and two to four times smaller than permanent growth effect estimates, with larger discrepancies for initially hot and cold countries.
Here is one key part of the intuition:
We present a range of evidence that global growth is tied together across countries, which suggests that country-specific shocks are unlikely to cause permanent changes in country-level growth rates…Relatedly, we find that differences in levels of income across countries persist strongly, while growth differences tend to be transitory.
Another way to make the point is that one’s model of the process should be consistent with a pre-carbon-explosion account of income differences (have you ever seen those media articles about how heat from climate change supposedly is making us stupider, with no thought given to the further implications of that result? Mood affiliation at work there, of course).
After the authors go through all of their final calculations, 3.7 degrees Centigrade of warming reduces global gdp by 7 to 12 percent by 2099, relative to no warming at all. For sub-Saharan Africa, gdp falls by 21 percent, but for Europe gdp rises by 0.6 percent, again by 2099.
The authors also work through just how sensitive the results are to what counts as a level effect and what counts as a growth effect. For instance, if the warming effect for Europe is treated as a permanent growth effect, Europe would see a near-doubling of income compared to the no-warming scenario, and the reduction in African gdp would be 88 percent, not just 21 percent.
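To see why the level-versus-growth distinction matters so much, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption, not an estimate from the paper: a level effect shaves a fixed percentage off GDP, while a permanent growth effect compounds every year out to 2099.

```python
# Illustrative sketch: a level effect vs. a permanent growth effect on GDP by 2099.
# All numbers are assumptions for illustration, not the paper's estimates.

baseline_growth = 0.02        # assumed annual GDP growth with no warming
years = 2099 - 2024           # projection horizon

gdp_no_warming = (1 + baseline_growth) ** years

# Level effect: warming permanently lowers the *level* of GDP by 10%.
level_damage = 0.10
gdp_level = (1 - level_damage) * gdp_no_warming

# Growth effect: warming permanently lowers the *growth rate* by 0.2 points per year.
growth_damage = 0.002
gdp_growth = (1 + baseline_growth - growth_damage) ** years

print(f"Loss by 2099 under a level effect:  {1 - gdp_level / gdp_no_warming:.1%}")
print(f"Loss by 2099 under a growth effect: {1 - gdp_growth / gdp_no_warming:.1%}")
```

Even a small permanent hit to the growth rate compounds into a larger shortfall than a one-time level cut, and the gap widens the further out you project, which is why the two modeling choices produce damage estimates an order of magnitude apart.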
By the way, the authors suggest the growth bliss point for a country (guess!) is thirteen degrees Centigrade.
This paper has many distinct moving parts, and thus it is difficult to pin down exactly what the right answer is, a point the authors stress rather than try to hide. In any case it represents a major advance of thought in this very difficult area.
From Google DeepMind (it’s happening)
We’re presenting the first AI to solve International Mathematical Olympiad problems at a silver medalist level. It combines AlphaProof, a new breakthrough model for formal reasoning, and AlphaGeometry 2, an improved version of our previous system.
Here is further information.
From the NYT three days ago: “A.I. Can Write Poetry, but It Struggles With Math.” From the NYT today: “Move Over, Mathematicians, Here Comes AlphaProof.” And here is one opinion: “This type of A.I. learns by itself and can scale indefinitely,” said Dr. Silver, who is Google DeepMind’s vice-president of reinforcement learning. Okie-dokie!
Not Lost In Translation: How Barbarian Books Laid the Foundation for Japan’s Industrial Revolution
Japan’s growth miracle after World War II is well known but that was Japan’s second miracle. The first was perhaps even more miraculous. At the end of the 19th century, under the Meiji Restoration, Japan transformed itself almost overnight from a peasant economy to an industrial powerhouse.
After centuries of resisting economic and social change, Japan transformed from a relatively poor, predominantly agricultural economy specialized in the exports of unprocessed, primary products to an economy specialized in the export of manufactures in under fifteen years.
In a remarkable new paper, Juhász, Sakabe, and Weinstein show how the key to this transformation was a massive effort to translate and codify technical information in the Japanese language. This state-led initiative made cutting-edge industrial knowledge accessible to Japanese entrepreneurs and workers in a way that was unparalleled among non-Western countries at the time.
Here’s an amazing graph which tells much of the story. In both 1870 and 1910 most of the world’s technical knowledge is in French, English, Italian, and German, but look at what happens in Japan: it goes from basically no technical books in 1870 to being on a par with English by 1910. Moreover, no other country did this.
Translating a technical document today is much easier than in the past because the words already exist. Translating technical documents in the late 19th century, however, required the creation and standardization of entirely new words.
…the Institute of Barbarian Books (Bansho Torishirabesho)…was tasked with developing English-Japanese dictionaries to facilitate technical translations. This project was the first step in what would become a massive government effort to codify and absorb Western science. Linguists and lexicographers have written extensively on the difficulty of scientific translation, which explains why little codification of knowledge happened in languages other than English and its close cognates: French and German (c.f. Kokawa et al. 1994; Lippert 2001; Clark 2009). The linguistic problem was two-fold. First, no words existed in Japanese for canonical Industrial Revolution products such as the railroad, steam engine, or telegraph, and using phonetic representations of all untranslatable jargon in a technical book resulted in transliteration of the text, not translation. Second, translations needed to be standardized so that all translators would translate a given foreign word into the same Japanese one.
Solving these two problems became one of the Institute’s main objectives.
Here’s a graph showing the creation of new words in Japan by year. You can see the explosion in new words in the late 19th century. Note that this happened well after the Perry Mission. The words didn’t simply evolve; the authors argue that new words were created as a form of industrial policy.
By the way, AstralCodexTen points us to an interesting biography of a translator of that era who worked on economics books:
[Fukuzawa] makes great progress on a number of translations. Among them is the first Western economics book translated into Japanese. In the course of this work, he encounters difficulties with the concept of “competition.” He decides to coin a new Japanese word, kyoso, derived from the words for “race and fight.” His patron, a Confucian, is unimpressed with this translation. He suggests other renderings. Why not “love of the nation shown in connection with trade”? Or “open generosity from a merchant in times of national stress”? But Fukuzawa insists on kyoso, and now the word is the first result on Google Translate.
There is a lot more in this paper. In particular, the authors show how the translation of documents led to productivity growth on an industry-by-industry basis, and they demonstrate the importance of this mechanism for economic growth across the world.
The bottom line for me is this: What caused the industrial revolution is a perennial question–was it coal, freedom, literacy?–but this is the first paper which gives what I think is a truly compelling answer for one particular case. Japan’s rapid industrialization under the Meiji Restoration was driven by its unprecedented effort to translate, codify, and disseminate Western technical knowledge in the Japanese language.
Disappearing polymorphs
Here’s a wild phenomenon I wasn’t previously aware of: In crystallography and materials science, a polymorph is a solid material that can exist in more than one crystal structure while maintaining the same chemical composition. Diamond and graphite are two polymorphs of carbon: diamond is carbon crystallized in an isometric structure and graphite is carbon crystallized in a hexagonal structure. Now imagine that one day your spouse’s diamond ring turns to graphite! That’s unlikely with carbon, but it happens with other polymorphs when a metastable (only locally stable) form becomes seeded with the more stable form.
The drug ritonavir, originally used for AIDS (and also a component of the COVID medication Paxlovid), for example, was introduced in 1996, but by 1998 it could no longer be produced. Despite the best efforts of the manufacturer, Abbott, every time they tried to create the old ritonavir a new crystallized form (form II) was produced, which was not medically effective. The problem was that once form II exists it’s almost impossible to get rid of: microscopic particles of form II ritonavir seeded any attempt to create form I.
Form II was of sufficiently lower energy that it became impossible to produce Form I in any laboratory where Form II was introduced, even indirectly. Scientists who had been exposed to Form II in the past seemingly contaminated entire manufacturing plants by their presence, probably because they carried over microscopic seed crystals of the new polymorph.
Wikipedia continues:
In the 1963 novel Cat’s Cradle, by Kurt Vonnegut, the narrator learns about Ice-nine, an alternative structure of water that is solid at room temperature and acts as a seed crystal upon contact with ordinary liquid water, causing that liquid water to instantly freeze and transform into more Ice-nine. Later in the book, a character frozen in Ice-nine falls into the sea. Instantly, all the water in the world’s seas, rivers, and groundwater transforms into solid Ice-nine, leading to a climactic doomsday scenario.
Given the last point you will perhaps not be surprised to learn that the hat tip goes to Eliezer Yudkowsky who worries about such things.
Why isn’t there an economics of animal welfare field?
On Friday I was keynote speaker at a quite good Brown University conference on this topic. I, like some of the other people there, wondered why animal welfare does not have its own economics journal, own association, own JEL code, and own mini-field, much as cultural economics or defense economics developed several decades ago. How about its own blog or Twitter feed? You might even say there is a theorem of sorts: if an economics subfield can exist, it will. And so I think this subfield is indeed on the way. Perhaps it needs one or two name-recognized economists to publish a paper on the topic in a top five journal? Whoever writes such a breakthrough piece will be cited for a long time to come, even if many of those citations will not be in top-tier journals. Will that person be you?
I do understand there is plenty about animal welfare in ag econ journals and departments, but somehow the way the world is tiered that just doesn’t count. Yes that is unfair, but the point remains that this subfield remains an underexploited intellectual profit opportunity.
Addendum: Here is a new piece by Cass Sunstein.
The class gap in academic career progression
There is a new and excellent paper by Anna Stansbury and Kyra Rodriguez on this topic:
Unlike gender or race, class is rarely a focus of research or DEI efforts in elite US occupations. Should it be? In this paper, we document a large class gap in career progression in one labor market: US tenure-track academia. Using parental education to proxy for socioeconomic background, we compare career outcomes of people who got their PhDs in the same institution and field (excluding those with PhD parents). First-generation college graduates are 13% less likely to end up tenured at an R1, and are on average tenured at institutions ranked 9% lower, than their PhD classmates with a parent with a (non-PhD) graduate degree. We explore three sets of mechanisms: (1) research productivity, (2) networks, and (3) preferences. Research productivity can explain less than a third of the class gap, and preferences explain almost none. Our analyses of coauthor characteristics suggest networks likely play a role. Finally, examining PhDs who work in industry we find a class gap in pay and in managerial responsibilities which widens over the career. This means a class gap in career progression exists in other US occupations beyond academia.
Here is a first-rate tweet storm by Stansbury on the paper. Via Aidan Finley.
The intelligent chicken culture that is Canada
A British Columbia chicken earned a Guinness World Record by identifying different numbers, colors and letters.
Gabriola Island veterinarian Emily Carrington said she bought five hyline chickens last year to produce eggs, and she soon started training the hens to identify magnetic letters and numbers.
“Their job was to only peck the number or letter that I taught them to peck and ignore the other ones. Even if I add a whole bunch of other letters that aren’t the letter they are supposed to peck, they will just peck the letter that I trained them to peck,” Carrington told the Nanaimo News Bulletin.
Carrington decided to have all of her chickens attempt the Guinness World Records title for the most tricks by a chicken in one minute.
One of the chickens, Lacy, emerged as the clear winner of the flock, correctly identifying 6 letters, numbers and colors in one minute.
The focused nature of the tricks led Guinness World Records to create a new category for Lacy: the most identifications by a chicken in one minute.
Here is the full story, via the excellent Samir Varma.
How Many Workers Did It Take to Build the Great Pyramid of Giza?
The Great Pyramid of Giza was built circa 2600 BC and was the world’s tallest structure for nearly 4000 years. It consists of an estimated 2.3 million blocks with a weight on the order of 6-7 million tons. How many people did it take to construct the Great Pyramid? Vaclav Smil in Numbers Don’t Lie gives an interesting method of calculation:
The Great Pyramid’s potential energy (what is required to lift the mass above ground level) is about 2.4 trillion joules. Calculating this is fairly easy: it is simply the product of the acceleration due to gravity, the pyramid’s mass, and its center of mass (a quarter of its height)…I am assuming a mean of 2.6 tons per cubic meter and hence a total mass of about 6.75 million tons.
People are able to convert about 20 percent of food energy into useful work, and for hard-working men that amounts to about 440 kilojoules a day. Lifting the stones would thus require about 5.5 million labor days (2.4 trillion/440,000), or about 275,000 days a year during [a] 20-year period, and about 900 people could deliver that by working 10 hours a day for 300 days a year. A similar number might be needed to emplace the stones in the rising structure and then smooth the cladding blocks…And in order to cut 2.6 million cubic meters of stone in 20 years, the project would have required about 1,500 quarrymen working 300 days per year and producing 0.25 cubic meters of stone per capita…the grand total would then be some 3,300 workers. Even if we were to double that in order to account for designers, organizers and overseers etc. etc….the total would be still fewer than 7,000 workers.
…During the time of the pyramid’s construction, the total population of Egypt was 1.5-1.6 million people, and hence the deployed force of less than 10,000 would not have amounted to any extraordinary imposition on the country’s economy.
I was surprised at the low number and pleased at the unusual method of calculation. Archeological evidence from the nearby workers’ village suggests 4,000-5,000 on-site workers, not including the quarrymen, transporters, designers, and support staff. Thus, Smil’s calculation looks very good.
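Smil’s arithmetic is easy to verify. Here is a short sketch that reproduces his potential-energy and labor numbers; the height and density figures are the standard ones, and the labor assumptions are the ones Smil states above.

```python
# Reproducing Smil's back-of-the-envelope arithmetic for the Great Pyramid.
g = 9.81            # gravitational acceleration, m/s^2
height = 146.6      # original height of the Great Pyramid, meters
volume = 2.6e6      # cubic meters of stone
density = 2600      # kg per cubic meter (Smil's 2.6 tons/m^3)

mass = volume * density                     # ~6.75 billion kg (~6.75 million tons)
potential_energy = mass * g * height / 4    # center of mass sits at ~1/4 of the height

work_per_day = 440_000                      # ~440 kJ of useful work per laborer per day
labor_days = potential_energy / work_per_day

lifting_crew = labor_days / (20 * 300)      # 20 years at 300 working days per year

print(f"Potential energy: {potential_energy:.2e} J")         # ~2.4e12 J
print(f"Labor days to lift the stones: {labor_days:.2e}")    # ~5.5 million
print(f"Implied lifting crew: {lifting_crew:.0f} workers")   # ~900
```

Doubling that crew to emplace the stones and adding roughly 1,500 quarrymen gets you to Smil’s grand total of a few thousand workers.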
What other unusual calculations do you know?

Nuclear is Not Best Everywhere
Australia is having a debate over nuclear power. Hamilton and Heeney weigh in with an important perspective:
On the basis of many conversations about Australian energy policy over the years, we can divide the proponents of nuclear energy into three groups.
The first might be called the “ideologues”. They favour nuclear not because of its zero emissions, but despite it. Indeed, many are climate sceptics. They hate renewables because the left loves them, and they favour nuclear because the left hates it.
The second might be called the “engineers”. They favour nuclear energy because it’s cool. Like a Ferrari, they marvel at its performance and stability. They see it as the energy source of the future. The stuff of science fiction.
The third might be called the “pragmatists”. They are not super attentive or highly informed about the intricacies of energy policy. They superficially believe nuclear can serve as a common-sense antidote to the practical shortcomings of renewables.
Conspicuously absent are those who might be called the “economists”. They couldn’t care less about exactly how electrons are produced. They simply want the cheapest possible energy that meets a minimum standard of reliability and emissions.
On the basis of the economics, Hamilton and Heeney conclude that nuclear is expensive for Australia:
The CSIRO estimates the cost of 90 per cent renewables, with firming, transmission, and integration costs included, at $109 per megawatt hour. Based on South Korean costs (roughly one-third of the US and Europe), a 60-year lifespan, a 60 per cent economic utilisation rate (as per coal today), and an eight-year build time (as per the global average), nuclear would cost $200 per megawatt hour – nearly double.
The same electrons delivered with the same reliability, just twice as expensive under what is a fairly optimistic scenario.
Note: this takes into account that nuclear is available when the sun doesn’t shine and the winds don’t blow; but so are batteries, which the firmed renewables figure already includes.
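For intuition about why build cost, lifespan, and utilisation drive comparisons like the CSIRO’s, here is a stripped-down levelized-cost sketch. The function is the standard capital-recovery formula; every input number is an illustrative assumption, not a figure from the CSIRO or from Hamilton and Heeney.

```python
# A minimal levelized cost of electricity (LCOE) sketch.
# All input numbers are illustrative assumptions, not CSIRO figures.

def lcoe(capex_per_kw, lifetime_years, capacity_factor,
         discount_rate=0.07, fixed_om_per_kw_yr=100.0, fuel_per_mwh=0.0):
    """Approximate cost in $/MWh for one kW of installed capacity."""
    # Capital recovery factor: spreads up-front capital over the plant's life.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years) / \
          ((1 + discount_rate) ** lifetime_years - 1)
    annual_capital = capex_per_kw * crf
    mwh_per_year = 8.76 * capacity_factor          # MWh produced per kW per year
    return (annual_capital + fixed_om_per_kw_yr) / mwh_per_year + fuel_per_mwh

# Hypothetical nuclear plant: high capex, 60-year life, 60% utilisation.
print(f"Nuclear-like plant:     ${lcoe(8000, 60, 0.60, fuel_per_mwh=10):.0f}/MWh")
# Hypothetical firmed renewables: lower capex per kW, shorter life, lower capacity factor.
print(f"Firmed renewables-like: ${lcoe(2500, 25, 0.35):.0f}/MWh")
```

The structural point: with any positive discount rate, output decades from now barely offsets up-front capital, so a 60-year life helps less than it seems, while the utilisation rate divides everything. That is why the economics, not the engineering, decides the question.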
I suspect that Hamilton and Heeney are right on the numbers but it’s this argument that I find most compelling:
If you need external validation of these basic economics, look no further than the opposition’s own announcement. Rather than lift the moratorium and allow private firms to supply nuclear energy if it’s commercially viable, the opposition has opted for government to be the owner and operator. A smoking gun of economic unviability if ever there were one.
I am optimistic about the potential of small modular reactors (SMRs) based on innovative designs. These reactors can ideally be located near AI facilities. As I argued in the Marginal Revolution Theory of Innovation, innovation is a dynamic process; success rarely comes on the first attempt. The key to innovation is continuous refinement and improvement. These small reactors based on different technologies give us an opportunity to refine and improve. To achieve this, we must overhaul our regulatory framework, which has disproportionately burdened nuclear energy—our greenest power source—with excessive regulation compared to more hazardous and less environmentally friendly technologies.
Electrons are electrons. We should allow all electricity generation technologies to compete in the market on an equal footing. Let the best technologies win.
Needed in Empirical Social Science: Numbers
By Aaron S. Edlin and Michael Love:
Knowing the magnitude and standard error of an empirical estimate is much more important than simply knowing the estimate’s sign and whether it is statistically significant. Yet, we find that even in top journals, when empirical social scientists choose their headline results – the results they put in abstracts – the vast majority ignore this teaching and report neither the magnitude nor the precision of their findings. They provide no numerical headline results for 63%±3% of empirical economics papers and for a whopping 92% ± 1% of empirical political science or sociology papers between 1999 and 2019. Moreover, they essentially never report precision (0.1% ± 0.1%) in headline results. Many social scientists appear wedded to a null hypothesis testing culture instead of an estimation culture. There is another way: medical researchers routinely report numerical magnitudes (98%±1%) and precision (83% ± 2%) in headline results. Trends suggest that economists, but not political scientists or sociologists, are warming to numerical reporting: the share of empirical economics articles with numerical headline results doubled since 1999, and economics articles with numerical headline results get more citations (+19% ± 11%).
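As a concrete illustration of reporting magnitude plus precision, here is how the ± on a sample proportion is typically computed; the counts below are made up for the example, not taken from Edlin and Love’s data.

```python
import math

# Reporting a headline result as magnitude +/- precision for a sample proportion.
# The counts below are hypothetical, not Edlin and Love's actual data.
n_papers = 900           # number of empirical papers coded
n_no_numbers = 567       # papers whose abstracts report no numerical result

p = n_no_numbers / n_papers
se = math.sqrt(p * (1 - p) / n_papers)    # standard error of the proportion

print(f"Share with no numerical headline result: {p:.0%} ± {se:.0%}")
```

A bare significance star conveys neither number; reporting both tells the reader how big the effect is and how precisely it has been estimated.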
Via somebody on Twitter?
“US-based academics and those at top-ranked institutions exhibit higher egocentrism and toxicity in their tweets”
Compared to other academics, that is. They are still more reasonable than the general public on Twitter. Here is the paper by Prashant Garg and Thiemo Fetzer. Here is a useful tweet storm about the paper. Via Kris Gulati.
China fact of the day
In 2022 more of the top-tier AI researchers working in America hailed from China than from America.
Of course the Canadians do their part to help make this come true. Here is a much longer survey from The Economist about science in China. Via Z.
*The Wrong Stuff: How the Soviet Space Program Crashed and Burned*
By John Strausbaugh, an excellent book. Here is one good passage of many:
Putting dogs on top of rockets was nothing new. Since so little was known about the effects that blasting off in a rocket might have on the human body and brain — the g-force of acceleration, the disorientation of weightlessness, the impact of radiation, the g-force of deceleration — the Soviets and the Americans both had been using various species of animals to test conditions since the 1940s. The Americans started sending up fruit flies aboard their White Sands V-2s in 1947. An anesthetized rhesus monkey they named Albert II…went up eighty-three miles in a V-2 in 1949. Unfortunately, his parachute failed to open on reentry and he was smashed to death on impact with the ground. The Americans continued to send up primates in the 1940s and 1950s. Something like two-thirds of them died. They used many other species as well, maybe the oddest of which was black bears, who were strapped into a rocket-powered sled at a facility with the deceptively sweet name the Daisy Track to test the physical effects of ultra-rapid acceleration and deceleration.
Recommended.
A virtual rodent predicts the structure of neural activity across behaviors
Animals have exquisite control of their bodies, allowing them to perform a diverse range of behaviors. How such control is implemented by the brain, however, remains unclear. Advancing our understanding requires models that can relate principles of control to the structure of neural activity in behaving animals. To facilitate this, we built a ‘virtual rodent’, in which an artificial neural network actuates a biomechanically realistic model of the rat in a physics simulator. We used deep reinforcement learning to train the virtual agent to imitate the behavior of freely-moving rats, thus allowing us to compare neural activity recorded in real rats to the network activity of a virtual rodent mimicking their behavior. We found that neural activity in the sensorimotor striatum and motor cortex was better predicted by the virtual rodent’s network activity than by any features of the real rat’s movements, consistent with both regions implementing inverse dynamics. Furthermore, the network’s latent variability predicted the structure of neural variability across behaviors and afforded robustness in a way consistent with the minimal intervention principle of optimal feedback control. These results demonstrate how physical simulation of biomechanically realistic virtual animals can help interpret the structure of neural activity across behavior and relate it to theoretical principles of motor control.
Here is the new Nature article by Diego Aldarondo et al. Via @sebkrier.
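The core analysis is a decoding comparison: is recorded neural activity predicted better by the virtual rodent’s network activations than by kinematic features of the animal’s movement? Here is a toy sketch of that kind of comparison, with random placeholder arrays and a plain ridge regression standing in for the paper’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Toy stand-in for the paper's comparison: predict recorded "neural activity"
# from (a) the virtual rodent's network activations vs. (b) movement features.
# All arrays here are random placeholders, not the study's recordings.
rng = np.random.default_rng(0)
n_timepoints, n_neurons = 2000, 50

network_units = rng.normal(size=(n_timepoints, 128))   # ANN activations
movement_feats = rng.normal(size=(n_timepoints, 20))   # joint angles, speeds, etc.
# Fake neural activity partly driven by the network units, to make the point.
neural = network_units[:, :n_neurons] + 0.5 * rng.normal(size=(n_timepoints, n_neurons))

def mean_cv_r2(X, Y):
    """Average cross-validated R^2 across recorded units."""
    return np.mean([cross_val_score(Ridge(alpha=1.0), X, Y[:, i], cv=5).mean()
                    for i in range(Y.shape[1])])

print("R^2 from network activations:", round(mean_cv_r2(network_units, neural), 2))
print("R^2 from movement features:  ", round(mean_cv_r2(movement_feats, neural), 2))
```

In the paper, the network wins this comparison in the sensorimotor striatum and motor cortex, which is the sense in which those regions look like they are implementing inverse dynamics.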
Terence Tao on AI and mathematics
With formalization projects, what we’ve noticed is that you can collaborate with people who don’t understand the entire mathematics of the entire project, but they understand one tiny little piece. It’s like any modern device. No single person can build a computer on their own, mine all the metals and refine them, and then create the hardware and the software. We have all these specialists, and we have a big logistics supply chain, and eventually we can create a smartphone or whatever. Right now, in a mathematical collaboration, everyone has to know pretty much all the mathematics, and that is a stumbling block, as [Scholze] mentioned. But with these formalizations, it is possible to compartmentalize and contribute to a project only knowing a piece of it. I think also we should start formalizing textbooks. If a textbook is formalized, you can create these very interactive textbooks, where you could describe the proof of a result in a very high-level sense, assuming lots of knowledge. But if there are steps that you don’t understand, you can expand them and go into details—all the way down to the axioms if you want to. No one does this right now for textbooks because it’s too much work. But if you’re already formalizing it, the computer can create these interactive textbooks for you. It will make it easier for a mathematician in one field to start contributing to another because you can precisely specify subtasks of a big task that don’t require understanding everything.
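To make the compartmentalization point concrete, here is a toy Lean sketch (an invented example, not from any real formalization project): one contributor proves a small lemma, another uses it by name without reading its proof, and the proof assistant guarantees the pieces fit together.

```lean
-- One contributor proves a small lemma (here just wrapping a library fact).
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Another contributor uses the lemma by name without knowing its proof;
-- the proof checker verifies that the pieces compose correctly.
theorem swap_inner (a b c : Nat) : a + (b + c) = a + (c + b) := by
  rw [add_comm_example b c]
```

The interactive textbook idea is the same mechanism run in reverse: a high-level step can be expanded into the lemmas beneath it, in principle all the way down to the axioms, because the formal structure is already machine-checked.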
The entire interview is worth reading. As Adam Smith once said…