Category: Science
My excellent Conversation with Philip Ball
Here is the audio, video, and transcript. Here is part of the episode summary:
Tyler and Philip discuss how well scientists have stood up to power historically, the problematic pressures scientists feel within academia today, artificial wombs and the fertility crisis, the price of invisibility, the terrifying nature of outer space and Gothic cathedrals, the role Christianity played in the Scientific Revolution, what current myths may stick around forever, whether cells can be thought of as doing computation, the limitations of The Selfish Gene, whether the free energy principle can be usefully applied, the problem of microplastics gathering in testicles and other places, progress in science, his favorite science fiction, how to follow in his footsteps, and more.
Here is one excerpt, namely the opening bit:
TYLER COWEN: Hello, everyone, and welcome back to Conversations with Tyler. Today I’ll be chatting with Philip Ball. I think of Philip this way. We’ve had over 200 guests on Conversations with Tyler, and I think three of them, so far, have shown they are able to answer any question I might plausibly throw their way. Philip, I believe, is number four. He’s a scientist with degrees in chemistry and physics. He’s written about 30 books on different sciences. Both he and I have lost count.
He was an editor at Nature for about 20 years. His books cover such diverse topics as chemistry, physics, the history of experiments, social science, color, the elements, water, water in China, Chartres Cathedral, music, and more. But most notably, he has a new book out this year, a major work called How Life Works: A User’s Guide to the New Biology. Philip, welcome.
PHILIP BALL: Thank you, Tyler. Lovely to be here.
COWEN: What is the situation in history where scientists have most effectively stood up to power, not counting Jewish scientists, say, leaving Nazi Germany or the Soviet Union?
BALL: Gosh, now there’s a question to start with. Where they have most effectively stood up to power — this is a question that I looked at in a book (it must be about 10 years old now) which looked at the response of German physicists during the Nazi era to that regime. I’m afraid my conclusion was, the response was really not very impressive at all.
On the whole, the scientists acquiesced to what the regime wanted them to do. Very few of them were actively sympathetic to the Nazi party, but they mounted no real effective opposition whatsoever. I’m afraid that looking at that as a case study, really, made me realize that it’s actually very hard to find any time in history where scientists have actively mounted an effective opposition to that kind of imposition of some kind of ideology, or political power, or whatever. History doesn’t give us a very encouraging view of that.
That said, I think it’s fair to say, science is doing better these days. I think there’s a recognition that at an institutional level, science needs to be able to mobilize its resources when it’s threatened in this way. I think we’re starting to see that, certainly, with climate change. Scientists have come under fire a huge amount in that arena. I think there’s more institutional understanding of what to do about that. Scientists aren’t being so much left to their own devices to cope as best they can individually.
But I think that there’s this attitude that is still somewhat prevalent within science, that’s a bit like, “We’re above that.” This is exactly what some of the German physicists, particularly Werner Heisenberg, said during the Nazi regime, that science is somehow operating in a purer sphere, and that it’s removed from all the nastiness and the dirtiness that goes on in the political arena.
I think that that attitude hasn’t gone completely, but I think it needs to go. I think scientists need to get real, really, about the fact that they are working within a social and political context that they have to be able to work with, and to be able to — when the occasion demands it — take some control of, and not simply be pushed around by.
That, I think, is something that can only happen when there are institutional structures to allow it to happen, so that scientists are not left to their own individual devices and their own individual sense of morality to do something about it. I’m hoping that science will do better in the future than it’s done in the past.
COWEN: Which do you think are the power structures today that current scientists, say in the Anglo world, are most in thrall to?
Recommended, there are numerous topics of interest. I also asked GPT how much money it could earn if it had the powers of Wells’s Invisible Man.
From Reed and Logchies
Introduction. Your analysis produces a statistically insignificant estimate. Is it because the effect is negligibly different from zero? Or because your research design does not have sufficient power to achieve statistical significance? Alternatively, you read that “The median statistical power [in empirical economics] is 18%, or less” (Ioannidis et al., 2017) and you wonder if the article you are reading also has low statistical power. By the end of this blog, you will be able to easily answer both questions. Without doing any programming.
An Online App. In this post, we show how to calculate statistical power post-estimation for those who are not familiar with R. To do that, we have created a Shiny App that does all the necessary calculating for the researcher (CLICK HERE).
Here is the link and the full story.
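For the curious, the calculation behind such an app is easy to sketch. Here is a minimal version for a two-sided z-test under the usual normal approximation; the function below is my own illustration, not the app’s actual code:

```python
from scipy.stats import norm

def posthoc_power(estimate, se, alpha=0.05):
    """Power of a two-sided z-test to detect an effect equal to
    `estimate`, given its standard error `se`, at level `alpha`."""
    z_crit = norm.ppf(1 - alpha / 2)   # critical value, e.g. 1.96 for alpha=0.05
    z = estimate / se                  # the estimate's z-score
    # Probability the test rejects, assuming the true effect equals `estimate`
    return norm.cdf(z - z_crit) + norm.cdf(-z - z_crit)

# Sanity check: an estimate sitting exactly at the 5% threshold has ~50% power
print(round(posthoc_power(1.96, 1.0), 2))   # 0.5
```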
Obama’s space legacy?
Bucking his central planning instincts, Obama embraced a surprisingly laissez-faire approach to space flight that angered political allies and opponents alike.
In doing so, however, he tapped a reservoir of ingenuity and innovation that has ushered in a new age of space flight and exploration…
In her forthcoming book Bureaucrats and Billionaires, former NASA deputy administrator Lori Garver and reporter Michael Sheetz trace the origins of NASA’s commercial crew program, a revolutionary human spaceflight program that joins private aerospace manufacturers such as SpaceX and Boeing with NASA’s astronauts.
Garver writes that this hybrid allows space flight “at a fraction of the cost of previous government owned and operated systems.” A decade ago, however, the program faced opposition seemingly from every side.
The saga began early in 2010 when President Obama announced his intention to abort the Constellation program—NASA’s crewed spaceflight program—correctly pointing out that it was “over budget, behind schedule, and lacking in innovation.”
The decision angered almost everyone. As Garver and Sheetz write, the program was “extremely popular with Congress, and the contractors who were benefiting from the tax dollars coming their way.” An impressive array of stakeholders from aerospace companies, trade associations, and astronauts to lobbyists, Congressional delegations, and NASA pushed back.
The resistance was immense.
NASA chief Charles Bolden, while choking back tears, compared the decision to “a death in the family.” Pulitzer Prize-winning columnist Charles Krauthammer ominously noted the move would give the Russians “a monopoly on rides into space.” Congressman Pete Olson (R-Texas) called the decision “a crippling blow to America’s human spaceflight program.”
Few commentators seemed to even notice the $6 billion in spending over five years to support commercially built spacecraft to launch NASA’s astronauts into outer space…
By pulling the plug on Constellation, Obama had unleashed the power of markets and competition. While many associate competition with dog-eat-dog and survival of the fittest tropes, competition is a healthy and productive force.
Here is the full story, by John Miltimore at FEE (!). Via Matt Yglesias.
Does increasing division of labor lead to greater credentialism?
That is the theme of my latest Bloomberg column, here is one excerpt:
Consider business. For decades now, big businesses have been on the rise in the US, which means employment in large corporations that use a team approach is increasingly likely. One effect of this is that individual outputs are harder to measure. If a product does well, it is often not clear who should get the credit, because the inputs of so many people were involved in creating it.
It is difficult to recalibrate incentives to reflect this changing reality. Often companies respond by enforcing greater credentialism, trying to ensure that everyone is a worthwhile contributor. That could involve looking for an Ivy League education or a standout GitHub profile. Either way, companies are more likely to look for ex ante signals of quality and less likely to take chances on true outsiders, because if the outsider isn’t pulling their weight, it might not be evident for a long time.
And this:
The real losers in the team system are those who do not have the temperament for all the schooling and credential-gathering. Those credentials of course include recommendations from well-known contacts, so networking and socializing have become increasingly important. This is a workable situation for most people but a frustrating arrangement for others.
Some recent evidence indicates this problem is especially serious in the world of science. The number of authors on scientific papers has been rising sharply, a trend I have observed in my own field of economics. It was once rare for the research paper of a fresh job-market candidate to be co-authored; now it is common. The work may be wonderful, but how can you tell how much any one author contributed? In the natural and biological sciences, one paper can have dozens of co-authors.
Again, credentialism will become more important, not less. In relative terms, someone from MIT listed on a multiple-authored paper is more attractive than someone from Iowa State University.
The latter part of the piece also explains why we underinvest in databases, and in turn in LLMs. It is difficult to reward people, under current structures, for contributing to such a broad collective enterprise.
Okie-dokie, solve for the equilibrium
One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used as aids to human scientists, e.g. for brainstorming ideas, writing code, or prediction tasks, they still conduct only a small part of the scientific process. This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models to perform research independently and communicate their findings. We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation. In principle, this process can be repeated to iteratively develop ideas in an open-ended fashion, acting like the human scientific community. We demonstrate its versatility by applying it to three distinct subfields of machine learning: diffusion modeling, transformer-based language modeling, and learning dynamics. Each idea is implemented and developed into a full paper at a cost of less than $15 per paper. To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer. This approach signifies the beginning of a new era in scientific discovery in machine learning: bringing the transformative benefits of AI agents to the entire research process of AI itself, and taking us closer to a world where endless affordable creativity and innovation can be unleashed on the world’s most challenging problems. Our code is open-sourced at this https URL
That is from a new paper by Chris Lu, Cong Lu, Robert Tjarko Lange, Jakob Foerster, Jeff Clune, David Ha. Note this is related to some earlier work in economics by Benjamin Manning of MIT (with co-authors).
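The pipeline the abstract describes is, at heart, a loop. A stylized sketch follows; every function name here is a hypothetical stand-in for illustration, not the project’s actual API:

```python
# A stylized sketch of the loop the abstract describes. Every function
# below is a hypothetical stand-in, not the project's actual API.

def generate_idea(archive):        # the real system prompts an LLM with prior work
    return f"idea-{len(archive) + 1}"

def run_experiments(idea):         # the real system writes and executes code
    return {"idea": idea, "metric": 0.9}

def write_paper(idea, results):    # the real system drafts a full manuscript
    return f"Paper on {idea} (metric={results['metric']})"

def review(paper):                 # the automated reviewer assigns a score
    return 6.0                     # e.g. on a conference-style 1-10 scale

archive = []                       # papers accumulate and seed later ideas,
for _ in range(3):                 # making the process open-ended in principle
    idea = generate_idea(archive)
    results = run_experiments(idea)
    paper = write_paper(idea, results)
    archive.append((paper, review(paper)))

print(archive)
```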
I’ve said it before, and I’ll say it again. The marginal product of LLMs is highest when they are interacting with well-prepared, intricately cooperating humans at their peak, not when you pose them random queries for fun.
Beware research in large teams
Teamwork has become more important in recent decades. We show that larger teams generate an unintended side effect: individuals who finish their PhD when the average team in their field is larger have worse career prospects. Our analysis combines data on career outcomes from the Survey of Doctorate Recipients with publication data that measures team size from ISI Web of Science. As average team size in a field increased over time, junior academic scientists became less likely to secure research funding or obtain tenure and were more likely to leave academia relative to their older counterparts. The team size effect can fully account for the observed decline in tenure prospects in academic science. The rise in team size was not associated with the end of mandatory retirement. However, the doubling of the NIH budget was associated with a significant increase in team size. Our results demonstrate that academic science has not adjusted its reward structure, which is largely individual, in response to team science. Failing to address these concerns means a significant loss as junior scientists exit after a costly and specialized education in science.
That is from a new NBER working paper by
My excellent Conversation with Paul Bloom
Here is the audio, video, and transcript. Here is part of the episode summary:
Together Paul and Tyler explore whether psychologists understand day-to-day human behavior any better than normal folk, how babies can tell if you’re a jerk, at what age children have the capacity to believe in God, why the trend in religion is toward monotheism, the morality of getting paid to strangle cats, whether disgust should be built into LLMs, the possibilities of AI therapists, the best test for a theory of mind, why people overestimate Paul’s (and Tyler’s) intelligence, why flattery is undersupplied, why we should train flattery and tax empathy, Carl Jung, Big Five personality theory, Principles of Psychology by William James, the social psychology of the Hebrew Bible, his most successful unusual work habit, what he’ll work on next, and more.
And here is one excerpt:
COWEN: I have some questions about intelligence for you. If we think of large language models, should we let them feel disgust so that they avoid left-wing bias?
BLOOM: [laughs] Why would disgust make them avoid left-wing bias?
COWEN: Maybe we’re not sure it would, but there are various claims in the literature that for people on the right, disgust is a more fundamental emotion, and that a greater capacity to feel disgust encourages people in some ways to be more socially conservative. Debatable, but I don’t think it’s a crazy view. So, if you build LLMs, and you give them, say, a lot of empathy and not much or any disgust, you’re going to get left-leaning LLMs, which you might say, “Well, that was my goal.” But obviously, not everyone will accept that conclusion either.
BLOOM: I wouldn’t want woke LLMs. I think there’s a lot in extreme —
COWEN: You’ve got them, of course.
BLOOM: I’ve got them. I think Gemini is the one, if I wanted to go — the woke LLM of choice. Because I think the doctrine called wokeness leads to a lot of moral problems and makes the world worse in certain ways, but I wouldn’t mind left-wing LLMs.
In fact, I’m not a fan of disgust. You’re right that disgust is often associated with right-wing, but in the very worst instantiation of it. Disgust is what drives hatred towards gay people. It involves hatred of interracial marriage, the exclusion of immigrants, the exclusion of other races. If there’s one emotion I would take away from people, it would be disgust, at least disgust in the moral realm. They could keep their disgust towards rotten food and that sort of thing. That’s the one thing I wouldn’t put into LLMs. I’d rather put anger, pity, gratitude. Disgust is the one thing I’d keep away.
COWEN: So, you wouldn’t just cut back on it at the margin. You would just take disgust out of people if you could?
And:
COWEN: I think at the margin, I’ve moved against empathy more being a podcast host, that I’ll ask a question —
BLOOM: Wait. Why being a podcast host?
COWEN: Well, I’ll ask a question, and a lot of guests think it’s high status simply to signal empathy rather than giving a substantive answer. The signaling-empathy answers I find quite uninteresting, and I think a lot of my listeners do, too. Yet people will just keep on doing this, and I get frustrated. Then I think, “Well, Tyler, you should turn a bit more against empathy for this reason.” And I think that’s correct.
Paul cannot be blamed for doing that, however. So substantive, interesting, and entertaining throughout.
Dark oxygen: jubilant for others, cry for yourself and your kin
To summarize the new results:
An international team of researchers recently discovered that oxygen is being made by potato-shaped metallic nodules deep under the surface of the Pacific Ocean. In July, their findings, which call into question prevailing ideas about oxygen production, were published in the journal Nature Geoscience. The discovery could lead to a reconsideration of the origins of complex life on Earth.
The findings from a team of researchers led by Professor Andrew Sweetman at the U.K.’s Scottish Association for Marine Science, show that oxygen is being produced at around 4,000 metres below the surface of the ocean in complete darkness. This contradicts previous scientific assumptions that only living organisms, including plants and algae, can use energy to create oxygen through photosynthesis, using sunlight for the reaction.
As Julian Gough suggests, most life probably is on icy moons. This means a lot more life! Over a time slice, it could mean billions of additional lives out there. Did you pop open the champagne?
The bad news is that the chance that Robin Hanson’s “Great Filter” lies behind us is somewhat smaller. Which boosts the chance that it may lie in our near future. Did you pull out the tissues?
On net, did this news change your mood at all? Why or why not?
The Unseen Fallout: Chernobyl’s Deadly Air Pollution Legacy
A fascinating new paper, The Political Economic Determinants of Nuclear Power: Evidence from Chernobyl by Makarin, Qian, and Wang, was recently presented at the NBER Political Economy conference. The paper is nominally about how fossil fuel companies and coal miners in the US and UK used the Chernobyl disaster to successfully lobby against building more nuclear power plants. The data collection here is impressive, but lobbying of this kind is just how democracy works, and I found the political economy section less interesting than some of the background material.
First, the Chernobyl disaster ended nuclear power plant (NPP) construction in the United States (top-left panel), the country with the most NPPs in the world. Surprisingly, the Three Mile Island accident in 1979 (much less serious than Chernobyl) had very little effect on construction, although the one-two punch with Chernobyl in 1986 surely didn’t help. The same pattern is very clear across all countries and also across all democracies (top-right panel). The bottom two panels show the same data but for new plants rather than the cumulative total: there was a sharp break in 1986, with growth quickly converging to zero new plants per year.
Fewer nuclear plants than would otherwise have been built might have made a disaster less likely, but there were countervailing forces:
We document that the decline in new NPPs in democracies after Chernobyl was accompanied by an increase in the average age of the NPPs in use. To satisfy the rise in energy demand, reactors built prior to Chernobyl continued operating past their initially scheduled retirement dates. Using data on NPP incident reports, we show that such plants are more likely to have accidents. The data imply that Chernobyl resulted in the continued operation of older and more dangerous NPPs in the democracies.
Safety declined not only because the existing plants got older but also because “the slowdown of new NPP construction…delayed the adoption of new safer plants.” This is a point about innovation that I have often emphasized (see also here):
The key to innovation is continuous refinement and improvement…. Learning by doing requires doing….Thus, when considering innovation today, it’s essential to think about not only the current state of technology but also about the entire trajectory of development. A treatment that’s marginally better today may be much better tomorrow.
Regulation increased costs substantially:
The U.S. NRC requires six to seven years to approve NPPs. The total construction time afterwards ranges from decades to indefinite. Cost overruns and changing regulatory requirements during the construction process sometimes force investors to abandon construction after billions of dollars have already been sunk. Worldwide, companies have stopped construction on 90 reactors since the 1980s; 40 of those were in the U.S. alone. For example, in 2017, two South Carolina utilities abandoned two unfinished Westinghouse AP1000 reactors due to significant construction delays and cost overruns. At the time, this left two other U.S. AP1000 reactors under construction in Georgia. The original cost estimate of $14 billion for these two reactors rose to $23 billion. Construction only continued when the U.S. federal government promised financial support. These were the first new reactors in the U.S. in decades. In contrast, recent NPPs in China have taken only four to six years and $2 billion per reactor. When considering the choice of investing in nuclear energy versus fossil fuel energy, note that a typical natural gas plant takes approximately two years to construct (Lovering et al., 2016).
Chernobyl, to be clear, was a very costly disaster:
The initial emergency response, together with later decontamination of the environment, required more than 500,000 personnel and an estimated US$68 billion (2019 USD). Between five and seven percent of government spending in Ukraine is still related to Chernobyl. (emphasis added, AT) In Belarus, Chernobyl-related expenses fell from twenty-two percent of the national budget in 1991 to six percent by 2002.
The biggest safety effect of the decline in nuclear power plants was the increase in air pollution. First, the authors use satellite data on ambient particulates to show that when a new nuclear plant comes online, pollution in nearby cities declines significantly. Second, they use the decline in pollution to create preliminary estimates of the effect of pollution on health:
According to our calculations, the construction of an additional NPP, by reducing the total suspended particles (TSP) in the ambient environment, could on average save 816,058 additional life years.
According to our baseline estimates (Table 1), over the past 38 years, Chernobyl reduced the total number of NPPs worldwide by 389, which is almost entirely driven by the slowdown of new construction in democracies. Our calculations thus suggest that, globally, more than 318 million expected life years have been lost in democratic countries due to the decline in NPP growth in these countries after Chernobyl.
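As a quick back-of-envelope check, the global figure is essentially the product of the two quoted numbers. A sketch using only the rounded figures in the excerpt; the paper presumably multiplies unrounded estimates, which is why its stated total comes out slightly higher:

```python
life_years_per_npp = 816_058   # average expected life years saved per additional NPP
npp_shortfall = 389            # estimated worldwide decline in NPPs after Chernobyl

# 316-317 million from the rounded inputs, vs. the paper's "more than 318 million"
print(f"{life_years_per_npp * npp_shortfall:,}")   # 317,446,562
```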
The authors use the Air Quality Life Index from the University of Chicago, which I think is on the high side of estimates. Nevertheless, as you know, I find the new air pollution literature credible (also here), so the bottom line is almost certainly correct. Namely, Chernobyl caused many more deaths by reducing nuclear power plant construction and increasing air pollution than by its direct effects, which were small albeit not negligible.
Operation Warp Speed for Cows
The UK Health Security Agency has raised its pandemic threat level for H5N1 bird flu from a 3 to a 4 on a 6-point scale.
My takeaway is that we have completely failed to stem the outbreak in cattle, and there has been animal-to-human transmission that we are surely undercounting, but so far the virus has not mutated in a way that makes it well adapted to humans.
The failure to stem the outbreak in cattle is concerning because it suggests we would not be able to stem a human outbreak. We can easily test, quarantine and cull cattle!
It is absolutely outrageous that dairy farmers are refusing to cooperate on testing:
To date dairy farmers have, in large measure, refused to cooperate with efforts to chart how deeply the virus has infiltrated U.S. herds, seeing the possible stigma of admitting they have H5N1-infected cows as a greater risk than the virus itself.
We should be testing at much higher rates, and quarantining and culling. The dairy farmers should be and are being compensated, but frankly the farmers should have no say in the matter of testing. Externalities! Preventing a pandemic is much cheaper, both in resources and in restrictions on liberty, than dealing with one.
And how about an Operation Warp Speed for a vaccine for cows? Vaccinate. Vacca! It’s right there in the name: “vaccine” comes from vacca, Latin for cow. If only we could come up with a clever acronym for an Operation Warp Speed for COWS.
Developing a vaccine for cows would also speed up a human vaccine if one were needed.
Here are some key points from the UK HSA:
There is ongoing transmission of influenza A(H5N1) in the US, primarily through dairy cattle but with multispecies involvement including poultry, wild birds, other mammals (cats, rodents, wild mammals) and humans (1, 2). There is high uncertainty regarding the trajectory of the outbreak and there is no apparent reduction in transmission in response to the biosecurity measures that have been introduced to date. There is ongoing debate about whether the current outbreak should be described as sustained transmission given that transmission is likely to be facilitated by animal farming activities (3). However, given that this is a permanent context, the majority of the group considered this outbreak as sustained transmission with the associated risks.
…There is evidence of zoonotic transmission (human cases acquired from animals). There is likely to be under-ascertainment of mild zoonotic cases.
…Overall, there is no evidence of change in HA which is suggestive of human adaptation through these acquired mutations. Although genomic surveillance data are likely to lag behind infections, the lack of evidence of viral adaptation to α2,6SA receptors after thousands of dairy cattle infections may suggest that transmission within cows does not strongly predispose to human receptor adaptation. Evidence of which sialic acid receptors are present in cows, which is needed to support this hypothesis, is still preliminary and requires confirmation.
Global warming, and rate effects vs. level effects
There is a very interesting new paper on this topic by Ishan B. Nath, Valerie A. Ramey, and Peter J. Klenow. Here is the abstract:
Does a permanent rise in temperature decrease the level or growth rate of GDP in affected countries? Differing answers to this question lead prominent estimates of climate damages to diverge by an order of magnitude. This paper combines indirect evidence on economic growth with new empirical estimates of the dynamic effects of temperature on GDP to argue that warming has persistent, but not permanent, effects on growth. We start by presenting a range of evidence that technology flows tether country growth rates together, preventing temperature changes from causing growth rates to diverge permanently. We then use data from a panel of countries to show that temperature shocks have large and persistent effects on GDP, driven in part by persistence in temperature itself. These estimates imply projected future impacts that are three to five times larger than level effect estimates and two to four times smaller than permanent growth effect estimates, with larger discrepancies for initially hot and cold countries.
Here is one key part of the intuition:
We present a range of evidence that global growth is tied together across countries, which suggests that country-specific shocks are unlikely to cause permanent changes in country-level growth rates…Relatedly, we find that differences in levels of income across countries persist strongly, while growth differences tend to be transitory.
Another way to make the point is that one’s model of the process should be consistent with a pre-carbon-explosion explanation of income differences (have you ever seen those media articles about how heat from climate change supposedly is making us stupider, with no thought given to the further possible implications of that result? Mood affiliation at work there, of course).
After the authors go through all of their final calculations, 3.7 degrees Centigrade of warming reduces global gdp by 7 to 12 percent by 2099, relative to no warming at all. For sub-Saharan Africa, gdp falls by 21 percent, but for Europe gdp rises by 0.6 percent, again by 2099.
The authors also work through just how sensitive the results are to what is a level effect and what is a growth effect. For instance, if a warmer Europe leads to a permanent growth-effect projection, Europe would see a near-doubling of income, compared to the no warming scenario. The reduction in African gdp would be 88 percent, not just 21 percent.
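To see why the level-versus-growth distinction matters so much over 75 years, here is a toy simulation; every parameter is invented for illustration and none comes from the paper:

```python
import numpy as np

T = 75                          # roughly 2024 to 2099
g = 0.02                        # baseline annual growth rate
years = np.arange(T + 1)

baseline = np.exp(g * years)

# 1. Pure level effect: a one-time 2% hit to GDP, growth unchanged.
level = 0.98 * baseline

# 2. Permanent growth effect: growth falls by 0.3 pp forever, so the gap compounds.
growth = np.exp((g - 0.003) * years)

# 3. Persistent but not permanent: the growth drag decays 2% per year,
#    so the level gap widens at first and then stabilizes.
drag = 0.003 * 0.98 ** np.arange(T)
persistent = np.exp(np.concatenate(([0.0], np.cumsum(g - drag))))

for name, path in [("level", level), ("growth", growth), ("persistent", persistent)]:
    print(f"{name:>10s} GDP loss by 2099: {1 - path[-1] / baseline[-1]:.1%}")
```

With these made-up numbers, the persistent scenario’s loss lands several times above the pure level effect yet well below the permanent growth effect, which is the qualitative ordering the authors report.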
By the way, the authors suggest the growth bliss point for a country (guess!) is thirteen degrees Centigrade.
This paper has many distinct moving parts, and thus it is difficult to pin down what is exactly the right answer, a point the authors stress rather than try to hide. In any case it represents a major advance of thought in this very difficult area.
From Google DeepMind (it’s happening)
We’re presenting the first AI to solve International Mathematical Olympiad problems at a silver medalist level. It combines AlphaProof, a new breakthrough model for formal reasoning, and AlphaGeometry 2, an improved version of our previous system.
Here is further information.
From the NYT three days ago “A.I. Can Write Poetry, but It Struggles With Math.” From the NYT today: “Move Over, Mathematicians, Here Comes AlphaProof.” And here is one opinion: “This type of A.I. learns by itself and can scale indefinitely, said Dr. Silver, who is Google DeepMind’s vice-president of reinforcement learning.” Okie-dokie!
Not Lost In Translation: How Barbarian Books Laid the Foundation for Japan’s Industrial Revolution
Japan’s growth miracle after World War II is well known but that was Japan’s second miracle. The first was perhaps even more miraculous. At the end of the 19th century, under the Meiji Restoration, Japan transformed itself almost overnight from a peasant economy to an industrial powerhouse.
After centuries of resisting economic and social change, Japan transformed from a relatively poor, predominantly agricultural economy specialized in the exports of unprocessed, primary products to an economy specialized in the export of manufactures in under fifteen years.
In a remarkable new paper, Juhász, Sakabe, and Weinstein show how the key to this transformation was a massive effort to translate and codify technical information in the Japanese language. This state-led initiative made cutting-edge industrial knowledge accessible to Japanese entrepreneurs and workers in a way that was unparalleled among non-Western countries at the time.
Here’s an amazing graph which tells much of the story. In both 1870 and 1910, most of the technical knowledge of the world is in French, English, Italian, and German, but look at what happens in Japan: basically no technical books in 1870, yet on par with English by 1910. Moreover, no other country did this.
Translating a technical document today is much easier than in the past because the words already exist. Translating technical documents in the late 19th century, however, required the creation and standardization of entirely new words.
…the Institute of Barbarian Books (Bansho Torishirabesho)…was tasked with developing English-Japanese dictionaries to facilitate technical translations. This project was the first step in what would become a massive government effort to codify and absorb Western science. Linguists and lexicographers have written extensively on the difficulty of scientific translation, which explains why little codification of knowledge happened in languages other than English and its close cognates: French and German (c.f. Kokawa et al. 1994; Lippert 2001; Clark 2009). The linguistic problem was two-fold. First, no words existed in Japanese for canonical Industrial Revolution products such as the railroad, steam engine, or telegraph, and using phonetic representations of all untranslatable jargon in a technical book resulted in transliteration of the text, not translation. Second, translations needed to be standardized so that all translators would translate a given foreign word into the same Japanese one.
Solving these two problems became one of the Institute’s main objectives.
Here’s a graph showing the creation of new words in Japan by year. You can see the explosion in new words in the late 19th century. Note that this happened well after the Perry Mission. The words didn’t simply evolve; the authors argue new words were created as a form of industrial policy.
By the way, AstralCodexTen points us to an interesting biography of a translator of the time who worked on economics books:
[Fukuzawa] makes great progress on a number of translations. Among them is the first Western economics book translated into Japanese. In the course of this work, he encounters difficulties with the concept of “competition.” He decides to coin a new Japanese word, kyoso, derived from the words for “race and fight.” His patron, a Confucian, is unimpressed with this translation. He suggests other renderings. Why not “love of the nation shown in connection with trade”? Or “open generosity from a merchant in times of national stress”? But Fukuzawa insists on kyoso, and now the word is the first result on Google Translate.
There is a lot more in this paper. In particular, it shows how the translation of documents led to productivity growth on an industry-by-industry basis, and it demonstrates the importance of this mechanism for economic growth across the world.
The bottom line for me is this: What caused the industrial revolution is a perennial question (was it coal, freedom, literacy?), but this is the first paper that gives what I think is a truly compelling answer for one particular case. Japan’s rapid industrialization under the Meiji Restoration was driven by its unprecedented effort to translate, codify, and disseminate Western technical knowledge in the Japanese language.
Disappearing polymorphs
Here’s a wild phenomenon I wasn’t previously aware of: In crystallography and materials science, a polymorph is a solid material that can exist in more than one crystal structure while maintaining the same chemical composition. Diamond and graphite are two polymorphs of carbon: diamond is carbon crystallized with an isometric structure, and graphite is carbon crystallized with a hexagonal structure. Now imagine that one day your spouse’s diamond ring turns to graphite! That’s unlikely with carbon, but it happens with other polymorphs when a metastable (only locally stable) version becomes seeded with a more stable version.
The drug ritonavir, originally used for AIDS (and also a component of the COVID medication Paxlovid), was introduced in 1996, but by 1998 it could no longer be produced. Despite the best efforts of the manufacturer, Abbott, every time they tried to create the old ritonavir, a new crystallized version (form II) was produced, which was not medically effective. The problem was that once form II exists, it’s almost impossible to get rid of: microscopic particles of form II ritonavir seeded any attempt to create form I.
Form II was of sufficiently lower energy that it became impossible to produce Form I in any laboratory where Form II was introduced, even indirectly. Scientists who had been exposed to Form II in the past seemingly contaminated entire manufacturing plants by their presence, probably because they carried over microscopic seed crystals of the new polymorph.
Wikipedia continues:
In the 1963 novel Cat’s Cradle, by Kurt Vonnegut, the narrator learns about Ice-nine, an alternative structure of water that is solid at room temperature and acts as a seed crystal upon contact with ordinary liquid water, causing that liquid water to instantly freeze and transform into more Ice-nine. Later in the book, a character frozen in Ice-nine falls into the sea. Instantly, all the water in the world’s seas, rivers, and groundwater transforms into solid Ice-nine, leading to a climactic doomsday scenario.
Given the last point you will perhaps not be surprised to learn that the hat tip goes to Eliezer Yudkowsky who worries about such things.
Why isn’t there an economics of animal welfare field?
On Friday I was the keynote speaker at a quite good Brown University conference on this topic. I, like some of the other people there, wondered why animal welfare does not have its own economics journal, its own association, its own JEL code, and its own mini-field, much as cultural economics or defense economics developed several decades ago. How about its own blog or Twitter feed? You might even say there is a theorem of sorts: if an economics subfield can exist, it will. And so I think this subfield is indeed on the way. Perhaps it needs one or two name-recognized economists to publish a paper on the topic in a top-five journal? Whoever writes such a breakthrough piece will be cited for a long time to come, even if many of those citations will not be in top-tier journals. Will that person be you?
I do understand there is plenty about animal welfare in ag econ journals and departments, but somehow the way the world is tiered that just doesn’t count. Yes that is unfair, but the point remains that this subfield remains an underexploited intellectual profit opportunity.
Addendum: Here is a new piece by Cass Sunstein.