An elementary mathematical theory based on “selectivity” is proposed to address a question raised by Charles Darwin, namely, how one gender of a sexually dimorphic species might tend to evolve with greater variability than the other gender. Briefly, the theory says that if one sex is relatively selective then from one generation to the next, more variable subpopulations of the opposite sex will tend to prevail over those with lesser variability; and conversely, if a sex is relatively non-selective, then less variable subpopulations of the opposite sex will tend to prevail over those with greater variability. This theory makes no assumptions about differences in means between the sexes, nor does it presume that one sex is selective and the other non-selective. Two mathematical models are presented: a discrete-time one-step statistical model using normally distributed fitness values; and a continuous-time deterministic model using exponentially distributed fitness levels.
That is from a new paper by Theodore P. Hill, via Derek. Here is some of the history behind the paper, which ended up being “spiked.” And here is Andrew Gelman’s take. Here are emails relevant to the dispute.
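Taken literally, the selectivity mechanism in the abstract can be illustrated with a minimal simulation. Everything below (normal fitness distributions, the specific variances and cutoffs) is assumed purely for illustration, not taken from the paper's models:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two subpopulations with equal mean fitness but different variability
low_var = rng.normal(loc=0.0, scale=1.0, size=n)
high_var = rng.normal(loc=0.0, scale=2.0, size=n)

def accepted_share(pop, cutoff):
    """Fraction of a subpopulation whose fitness exceeds a selectivity cutoff."""
    return float(np.mean(pop > cutoff))

# A selective opposite sex (high cutoff) accepts more of the variable group...
selective_favors_variable = accepted_share(high_var, 1.5) > accepted_share(low_var, 1.5)
# ...while a non-selective one (low cutoff) accepts more of the less variable group.
nonselective_favors_stable = accepted_share(low_var, -1.5) > accepted_share(high_var, -1.5)
```

The point is only that a fat upper tail wins under a high bar, and a thin lower tail wins under a low bar; no difference in means is needed.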
Even with a question mark, my title, Do Boys Have a Comparative Advantage in Math and Science?, is likely to appear sexist. Am I suggesting that boys are better at math and science than girls? No, I am suggesting they might be worse.
Consider first the so-called gender-equality paradox, namely the finding that countries with the highest levels of gender equality tend to have the lowest ratios of women to men in STEM education. Stoet and Geary put it well:
Finland excels in gender equality (World Economic Forum, 2015), its adolescent girls outperform boys in science literacy, and it ranks second in European educational performance (OECD, 2016b). With these high levels of educational performance and overall gender equality, Finland is poised to close the STEM gender gap. Yet, paradoxically, Finland has one of the world’s largest gender gaps in college degrees in STEM fields, and Norway and Sweden, also leading in gender-equality rankings, are not far behind (fewer than 25% of STEM graduates are women). We will show that this pattern extends throughout the world…
Two explanations for this apparent paradox have been offered. First, countries with greater gender equality tend to be richer and have larger welfare states than countries with less gender equality. As a result, less is riding on choice of career in the richer, gender-equal countries. Even if STEM fields pay more, we would expect small differences in personality that vary with gender to become more apparent as income increases. Paraphrasing John Adams, only in a rich country are people free to pursue their interests more than their needs. If women are somewhat less interested in STEM fields than men, then we would expect this difference to become more apparent as income increases.
A second explanation focuses on ability. Some people argue that more men than women have extraordinary ability levels in math and science because of greater male variability in most characteristics. Let’s put that hypothesis to the side. Instead, let’s think about individuals and their relative abilities in reading, science and math–this is what Stoet and Geary call an intra-individual score. Now consider the figure below, which is based on PISA test data from approximately half a million students across many countries. On the left are raw scores (normalized). Focus on the colors: red is for reading, blue for science and green for mathematics. Negative scores (scores to the left of the vertical line) indicate that females score higher than males on average; positive scores indicate that males score higher. Females score higher than males in reading in every country surveyed. Females also score higher than males in science and math in some countries.
Now consider the data on the right. In this case, Stoet and Geary ask, for each student, which subject they are relatively best at, and then they average by country. The differences by sex are now even more prominent. Not only are females better at reading, but even in countries where they are better at math and science than boys on average, they are relatively better at reading.
Thus, even when girls outperformed boys in science, as was the case in Finland, girls generally performed even better in reading, which means that their individual strength was, unlike boys’ strength, reading.
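The intra-individual comparison described above can be sketched in a few lines; the scores here are hypothetical, and this is a simplified stand-in for Stoet and Geary's actual standardization procedure:

```python
import numpy as np

subjects = ["reading", "science", "math"]
# Hypothetical PISA-style scores for two students (rows)
scores = np.array([
    [540, 520, 510],   # absolutely strong everywhere, strongest in reading
    [480, 505, 515],   # absolutely weaker, strongest in math
], dtype=float)

# Intra-individual score: each subject relative to the student's own mean
rel = scores - scores.mean(axis=1, keepdims=True)
best = [subjects[i] for i in rel.argmax(axis=1)]
# best == ["reading", "math"]
```

Note that the second student's relative strength is math even though the first student beats them in math absolutely; that is the distinction the right-hand panel is built on.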
Now consider what happens when students are told: Do what you are good at! Loosely speaking, the situation will be something like this: females will say, I got A’s in history and English and B’s in Science and Math; therefore, I should follow my strengths and specialize in something that draws on the same skills as history and English. Boys will say, I got B’s in Science and Math and C’s in history and English; therefore, I should follow my strengths and do something involving Science and Math.
On average, females have about the same grades in UP (“University Preparation”, AT) math and science courses as males, but higher grades in English/French and other qualifying courses that count toward the top 6 scores that determine their university rankings. This comparative advantage explains a substantial share of the gender difference in the probability of pursuing a STEM major, conditional on being STEM ready at the end of high school.
Put (too) simply the only men who are good enough to get into university are men who are good at STEM. Women are good enough to get into non-STEM and STEM fields. Thus, among university students, women dominate in the non-STEM fields and men survive in the STEM fields.
Finally, Stoet and Geary show that the above considerations also explain the gender-equality paradox, because the intra-individual differences are largest in the most gender-equal countries. In the figure below, on the left are the intra-individual differences in science by gender, which increase with gender equality. A higher score means that boys are more likely to have science as a relative strength (i.e., women may get absolutely better at everything with gender equality, but the figure suggests that they get relatively better at reading). On the right is the share of women going into STEM fields, which decreases with gender equality.
The male dominance in STEM fields is usually seen as due to a male advantage and a female disadvantage (whether genetic, cultural or otherwise). Stoet and Geary show that the result could instead be due to differences in relative advantage. Indeed, the theory of comparative advantage tells us that we could push this even further than Stoet and Geary. It could be the case, for example, that males are worse on average than females in all fields but they specialize in the field in which they are the least worst, namely science and math. In other words, boys could have an absolute disadvantage in all fields but a comparative advantage in math and science. I don’t claim that theory is true but it’s worth thinking about a pure case to understand how the same pattern can be interpreted in diametrically different ways.
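The pure case can be made concrete with a toy example; the grades below are invented for illustration:

```python
# Hypothetical grades: boys score lower in both subjects,
# yet math is their comparative advantage.
girls = {"reading": 80, "math": 70}
boys = {"reading": 55, "math": 60}

def comparative_advantage(scores):
    """Subject with the largest score relative to the person's own average."""
    avg = sum(scores.values()) / len(scores)
    return max(scores, key=lambda s: scores[s] - avg)

# Boys are absolutely worse at math than girls (60 < 70), yet math is
# the boys' comparative advantage, so specialization sorts boys into math.
```

Here `comparative_advantage(boys)` returns `"math"` and `comparative_advantage(girls)` returns `"reading"`, even though girls outscore boys in both subjects.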
That is the new and excellent book by David Quammen, here is one summary excerpt:
We are not precisely who we thought we were. We are composite creatures, and our ancestry seems to arise from a dark zone of the living world, a group of creatures about which science, until recent decades, was ignorant. Evolution is trickier, far more intricate, than we had realized. The tree of life is more tangled. Genes don’t move just vertically. They can also pass laterally across species boundaries, across wider gaps, even between different kingdoms of life, and some have come sideways into our own lineage — the primate lineage — from unsuspected, nonprimate sources. It’s the genetic equivalent of a blood transfusion or (different metaphor, preferred by some scientists) an infection that transforms identity. “Infective heredity.” I’ll say more about that in its place.
My favorite part of the book is the section, starting on p.244, on bacteria that are resistant to antibiotics that have not yet been invented. Overall this is likely to prove the best popular science book of the year, you can buy it here. Here are various reviews of the book.
A paper in Science covering over 80 thousand articles in 923 scientific journals finds that rejected papers are ultimately cited more than first acceptances.
We compared the number of times articles were cited (as of July 2011, i.e., 3 to 6 years after publication, from ISI Web of Science) depending on their being first-intents or resubmissions. We used methods robust to the skewed distribution of citation counts to ensure that a few highly cited articles were not driving the results. We controlled for year of publication, publishing journal (and thus impact factor), and their interaction. Resubmissions were significantly more cited than first-intents published the same year in the same journal.
The authors argue that the most likely explanation is that peer review increases the quality of manuscripts. Rejection makes you stronger. That’s possible, although the data are also consistent with peers being more likely to reject better papers!
Papers in economics are often too long but papers in Science are often too short. Consider the paragraph I quoted above. What is the next piece of information that you are expecting to learn? How many more citations does a resubmission receive? It’s bizarre that the paper never gives this number (as far as I can tell). Moreover, take a look at the figure (at right) that accompanies the discussion. The authors say the difference in citations is highly significant, but on this figure (which is on a log scale) the difference looks tiny! This figure is taking up a lot of space. What is it saying?
So what’s the number? Well, if you go to the online materials section the authors still don’t state the number, but from a table one can deduce that resubmissions receive approximately 7.5% more citations. That’s not bad, but we never learn how many citations first acceptances receive, so it could be less than one extra citation.
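For what it's worth, converting a difference on a log scale into a percentage is mechanical; the coefficient below is a hypothetical value chosen only to show the arithmetic behind a roughly 7.5% figure, not a number from the paper:

```python
import math

# Hypothetical difference in mean log citations between
# resubmissions and first-intents (illustrative value)
log_diff = 0.072

# A difference d in logs corresponds to a (e^d - 1) proportional difference
pct_more = (math.exp(log_diff) - 1) * 100  # roughly 7.5% more citations
```

This also shows why the gap looks tiny on a log-scale figure: 0.072 log units is visually negligible even when it is statistically significant.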
There’s something else which is odd. The authors say that about 75% of published articles are first-submissions. But top journals like Science and Nature reject 93% or more of submissions. Those two numbers don’t necessarily contradict. If everyone submits to Science first, or if no one ever resubmits to Science, then 100% of papers published in Science will be first submissions. Nevertheless, the 93% of papers that are rejected at top journals (and lower-ranked journals also have high rejection rates) are going somewhere, so for the system as a whole 75% seems implausibly high.
Econ papers sometimes exhaust me with robustness tests long after I have been convinced of the basic result, but Science papers often leave me puzzled about basic issues of context and interpretation. This is also puzzling. Shouldn’t more important papers be longer? Or is the value of time of scientists higher than that of economists, so it’s optimal for scientists to both write and read shorter papers? The length of law review articles would suggest that lawyers have the lowest value of time, except that doesn’t seem to be reflected in their consulting fees or wages. There is a dissertation to be written on the optimal length of scientific publications.
Scientists in developed countries provide nearly three times as many peer reviews per paper submitted as researchers in emerging nations, according to the largest ever survey of the practice.
The report — which surveyed more than 11,000 researchers worldwide — also finds a growing “reviewer fatigue”, with editors having to invite more reviewers to get each review done. The number rose from 1.9 invitations in 2013 to 2.4 in 2017…
The report notes that finding peer reviewers is becoming harder, even as the overall volume of publications rises globally (see ‘Is reviewer fatigue setting in?’).
File under “the cost disease strikes back.” Furthermore, it seems increasingly obvious that a lot of lesser journals just don’t matter, and that may discourage prospective referees from putting in the effort. And note:
In 2013–17, the United States contributed nearly 33% of peer reviews, and published 25.4% of articles worldwide. By contrast, emerging nations did 19% of peer reviews, and published 29% of all articles.
China stood out — the country accounted for 13.8% of scientific articles during the period, but did only 8.8% of reviews.
Nearly thirty years ago my GMU colleague Robin Hanson asked, Could Gambling Save Science? We now know that the answer is yes. Robin’s idea to gauge the quality of scientific theories using prediction markets, what he called idea futures, has been validated. Camerer et al. (2018), the latest paper from the Social Science Replication Project, tried to replicate 21 social-science studies published in Nature or Science between 2010 and 2015. Before the replications were run, the authors ran a prediction market–as they had done in previous replication research–and once again the prediction market did a very good job predicting which studies would replicate and which would not.
Ed Yong summarizes in the Atlantic:
Consider the new results from the Social Sciences Replication Project, in which 24 researchers attempted to replicate social-science studies published between 2010 and 2015 in Nature and Science—the world’s top two scientific journals. The replicators ran much bigger versions of the original studies, recruiting around five times as many volunteers as before. They did all their work in the open, and ran their plans past the teams behind the original experiments. And ultimately, they could only reproduce the results of 13 out of 21 studies—62 percent.
As it turned out, that finding was entirely predictable. While the SSRP team was doing their experimental re-runs, they also ran a “prediction market”—a stock exchange in which volunteers could buy or sell “shares” in the 21 studies, based on how reproducible they seemed. They recruited 206 volunteers—a mix of psychologists and economists, students and professors, none of whom were involved in the SSRP itself. Each started with $100 and could earn more by correctly betting on studies that eventually panned out.
At the start of the market, shares for every study cost $0.50 each. As trading continued, those prices soared and dipped depending on the traders’ activities. And after two weeks, the final price reflected the traders’ collective view on the odds that each study would successfully replicate. So, for example, a stock price of $0.87 would mean a study had an 87 percent chance of replicating. Overall, the traders thought that studies in the market would replicate 63 percent of the time—a figure that was uncannily close to the actual 62-percent success rate.
The traders’ instincts were also unfailingly sound when it came to individual studies. Look at the graph below. The market assigned higher odds of success for the 13 studies that were successfully replicated than the eight that weren’t—compare the blue diamonds to the yellow diamonds.
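The price-to-probability logic Yong describes is mechanical; the prices below are invented for illustration (the real market covered 21 studies):

```python
# Hypothetical final share prices in dollars; each doubles as the market's
# implied probability that the corresponding study replicates.
prices = [0.87, 0.71, 0.64, 0.55, 0.38]

implied_probs = prices  # a $0.87 share implies an 87% replication chance
expected_replication_rate = sum(prices) / len(prices)
```

Averaging the prices gives the market's expected overall replication rate, which is how the traders' aggregate forecast (about 63 percent) gets compared to the realized 62 percent.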
Kevin P. emails me:
Suppose humanity becomes a multi-planet species. Does the percentage of people living in autocratic societies decrease or increase relative to what we see on our planet today? How do the time and resources required to travel between inhabited planets affect this?
Do some people on “free” planets work to help the non-free? More or less than such countries today? Is there some scale that is reached so that a free Federation comes to guarantee freedom everywhere? Or maybe a tyrant or tyrants, once they have a couple of wealthy planets under their belt, are unstoppable because of the cooperation difficulties of the individual free planets?
When I think of settling other planets, my base case is one of extreme scarcity and fragility, at least at first and possibly for a long time. Those are not the conditions that breed liberty, whether it is “the private sector” or “the public sector” in charge.
Maybe corporations will settle space for some economic reason. Then you might expect space living to have the liberties of an oil platform in the sea, or perhaps a cruise ship. Except there would be more of a “we are in this all together” attitude, which I think would favor a kind of corporate autocracy.
Another scenario involves a military settling space, possibly for military reasons, and that too is not much of a liberal or democratic scenario.
You might also have religiously-motivated settlements, which presumably would be governed by the laws and principles of the religion. Over time, however, this scenario might give the greatest chance for subsequent liberalization.
America developed to be as free as it did (at least for some people) mostly because there was so much free land. Living standards were relatively high, and moving further westward was always an option. It is hard for me to think of an interplanetary version of the same condition. Easy exit and free resources don’t seem to go well together with the concept of space settlement.
Space stations and settlements will give power to those who control the infrastructure, a bit like Wittfogel’s Oriental Despotism hypothesis, except with both air and water being scarce.
I thus expect that interplanetary settlements, whatever their other virtues, will not do much for liberalism or liberty. Here is my earlier post on The Moon is a Harsh Mistress.
…all qualified scientists would get some guaranteed funding — no grants required. But there should be one added step: everyone must anonymously allocate a fraction of their funds to other researchers of their own choosing.
The goal of this system would be to let scientists devote more of their time to research…
In SOFA [Self-Organizing Funding Allocation], every participant starts with the same allocation of funding every year but must allot a portion to other scientists. Reasons to select someone could range from, ‘That was a great paper’ to ‘I think they will release useful data.’ Those who get the most give the most, because scientists give a percentage of everything received under SOFA. To avoid currying favour, this process will be anonymous…
We can limit collusions and kickback schemes — the financial equivalent of citation cartels — by mandating a minimum number of recipients and restricting people from designating frequent collaborators, or colleagues at the same institution. Counteracting gender, age and prestige biases that plague conventional peer review might even be easier in SOFA because they are measurable.
Here is the Johan Bollen piece in Nature.
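The redistribution mechanism described in the excerpt can be sketched as an iterative process; the base grant, give-away fraction, and donation choices below are all assumed for illustration and are not parameters from Bollen's proposal:

```python
def run_sofa(choices, base=100.0, give_frac=0.5, years=50):
    """choices[i] lists the scientists that scientist i donates to (split evenly).

    Each year every scientist receives the same base allocation plus
    donations from peers, and must pass on give_frac of everything received.
    """
    n = len(choices)
    received = [base] * n
    for _ in range(years):
        inflow = [0.0] * n
        for i, recipients in enumerate(choices):
            share = received[i] * give_frac / len(recipients)
            for r in recipients:
                inflow[r] += share
        received = [base + inflow[i] for i in range(n)]
    # Funds actually kept for research after the mandatory give-away
    return [r * (1 - give_frac) for r in received]

# Scientists 1 and 2 both fund scientist 0; scientist 0 funds scientist 1
kept = run_sofa([[1], [0], [0]])
```

The process converges to a steady state in which "those who get the most give the most," and highly chosen scientists end up with the largest research budgets, much like a PageRank over the donation graph.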
I was very happy with how this turned out, here is the audio and transcript. Here is how the CWTeam summarized it:
Michael Pollan has long been fascinated by nature and the ways we connect and clash with it, with decades of writing covering food, farming, cooking, and architecture. Pollan’s latest fascination? Our widespread and ancient desire to use nature to change our consciousness.
He joins Tyler to discuss his research and experience with psychedelics, including what kinds of people most benefit from them, what it can teach us about profundity, how it can change your personality and political views, the importance of culture in shaping the experience, the proper way to integrate it into mainstream practice, and — most importantly of all — whether it’s any fun.
He argues that LSD is underrated; I think it may be good for depression, but for casual use it is rapidly becoming overrated. Here is one exchange of relevance:
COWEN: Let me try a very philosophical question. Let’s say I could take a pill or a substance, and it would make everything seem profound. My receptivity to finding things profound would go up greatly. I could do very small events, and it would seem profound to me.
Is that, in fact, real profundity that I’m experiencing? Doesn’t real profundity somehow require excavating or experiencing things from actual society? Are psychedelics like taking this pill? They don’t give you real profundity. You just feel that many things are profound, but at the end of the experience, you don’t really have . . .
POLLAN: It depends. If you define profundity or the profound as exceptional, you have a point.
One of the things that’s very interesting about psychedelics is that our brains are tuned for novelty, and for good reason. It’s very adaptive to respond to new things in the environment, changes in your environment, threats in your environment. We’re tuned to disregard the familiar or take it for granted, which is indeed what most of us do.
One of the things that happens on psychedelics, and on cannabis interestingly enough — and there’s some science on it in the case of cannabis; I don’t think we’ve done the science yet with psychedelics — is that the familiar suddenly takes on greater weight, and there’s an appreciation of the familiar. I think a lot of familiar things are profound if looked at in the proper way.
The feelings of love I have for people in my family are profound, but I don’t always feel that profundity. Psychedelics change that balance. I talk in the book about having emotions that could be on Hallmark cards. We don’t think of Hallmark cards as being profound, but in fact, a lot of those sentiments are, properly regarded.
Yes, there are those moments you’ve smoked cannabis, and you’re looking at your hand, and you go, “Man, hands, they’re f — ing incredible.” You’re just taken with this. Is that profound or not? It sounds really goofy, but I think the line between profundity and banality is a lot finer than we think.
COWEN: I’ve never myself tried psychedelics. But I’ve asked the question, if I were to try, how would I think about what is the stopping point?
For my own life, I like, actually, to do the same things over and over again. Read books. Eat food. Spend time with friends. You can just keep on doing them, basically, till you die. I feel I’m in a very good groove on all of those.
If you take it once, and say you find it entrancing or interesting or attractive, what’s the thought process? How do you model what happens next?
POLLAN: That’s one of the really interesting things about them. You have this big experience, often positive, not always though. I had, on balance . . . all the experiences I described in the book, with one notable exception, were very positive experiences.
But I did not have a powerful desire to do it again. It doesn’t have that self-reinforcing quality, the dopamine release, I don’t know what it is, that comes with things that we like doing: eating and sex and sleep, all this kind of stuff. Your first thought after a big psychedelic experience is not “When can I do it again?” It’s like, “Do I ever have to do it again?”
COWEN: It doesn’t sound fun, though. What am I missing?
POLLAN: It’s not fun. For me, it’s not fun. I think there are doses where that might apply — low dose, so-called recreational dose, when people take some mushrooms and go to a concert, and they’re high essentially.
But the kind of experience I’m describing is a lot more — I won’t use the word profound because we’ve charged that one — that is a very internal and difficult journey that has moments of incredible beauty and lucidity, but also has dark moments, moments of contemplating death. Nothing you would describe as recreational except in the actual meaning of the word, which is never used. It’s not addictive, and I think that’s one of the reasons.
I did just talk to someone, though, who came up to me at a book signing, a guy probably in his 70s. He said, “I’ve got to tell you about the time I took LSD 16 days in a row.” That was striking. You can meet plenty of people who have marijuana or a drink 16 days in a row. But that was extraordinary. I don’t know why he did it. I’m curious to find out exactly what he got out of it.
In general, there’s a lot of space that passes. For the Grateful Dead, I don’t know. Maybe it was a nightly thing for them. But for most people, it doesn’t seem to be.
COWEN: Say I tried it, and I found it fascinating but not fun. Shouldn’t I then think there’s something wrong with me that the fascinating is not fun? Shouldn’t I downgrade my curiosity?
POLLAN: [laughs] Aren’t there many fascinating things that aren’t fun?
COWEN: All the ones I know, I find fun. This is what’s striking to me about your answer. It’s very surprising.
We even talk about LSD and sex, and why a writer’s second book is the key book for understanding that writer. Toward the end we cover the economics of food and, of course, the Michael Pollan production function:
COWEN: What skill do you tell them to invest in?
POLLAN: I tell them to read a lot. I’m amazed how many writing students don’t read. It’s criminal. Also, read better writers than you are. In other words, read great fiction. Cultivate your ear. Writing is a form of music, and we don’t pay enough attention to that.
When I’m drafting, there’s a period where I’m reading lots of research, and scientific articles, and history, and undistinguished prose, but as soon as I’m done with that and I’ve started drafting a chapter or an article, I stop reading that kind of stuff.
Before I go to bed, I read a novel every night. I read several pages of really good fiction. That’s because you do a lot of work in your sleep, and I want my brain to be in a rhythm of good prose.
Definitely recommended, as is Michael’s latest book How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence.
That is a new paper by Mikko Packalen and Jay Bhattacharya, here is the abstract:
The National Institutes of Health (NIH) plays a critical role in funding scientific endeavors in biomedicine that would be difficult to finance via private sources. One important mandate of the NIH is to fund innovative science that tries out new ideas, but many have questioned the NIH’s ability to fulfill this aim. We examine whether the NIH succeeds in funding work that tries out novel ideas. We find that novel science is more often NIH funded than is less innovative science but this positive result comes with several caveats. First, despite the implementation of initiatives to support edge science, the preference for funding novel science is mostly limited to work that builds on novel basic science ideas; projects that build on novel clinical ideas are not favored by the NIH over projects that build on well-established clinical knowledge. Second, NIH’s general preference for funding work that builds on basic science ideas, regardless of its novelty or application area, is a large contributor to the overall positive link between novelty and NIH funding. If funding rates for work that builds on basic science ideas and work that builds on clinical ideas had been equal, NIH’s funding rates for novel and traditional science would have been the same. Third, NIH’s propensity to fund projects that build on the most recent advances has declined over the last several decades. Thus, in this regard NIH funding has become more conservative despite initiatives to increase funding for innovative projects.
Models developed for gross domestic product (GDP) growth forecasting tend to be extremely complex, relying on a large number of variables and parameters. Such complexity is not always to the benefit of the accuracy of the forecast. Economic complexity constitutes a framework that builds on methods developed for the study of complex systems to construct approaches that are less demanding than standard macroeconomic ones in terms of data requirements, but whose accuracy remains to be systematically benchmarked. Here we develop a forecasting scheme that is shown to outperform the accuracy of the five-year forecast issued by the International Monetary Fund (IMF) by more than 25% on the available data. The model is based on effectively representing economic growth as a two-dimensional dynamical system, defined by GDP per capita and ‘fitness’, a variable computed using only publicly available product-level export data. We show that forecasting errors produced by the method are generally predictable and are also uncorrelated to IMF errors, suggesting that our method is extracting information that is complementary to standard approaches. We believe that our findings are of a very general nature and we plan to extend our validations on larger datasets in future works.
That is from A. Tacchella, D. Mazzilli, and L. Pietronero in Nature. Here is a Chris Lee story about the piece. Via John Chamberlin.
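The "fitness" variable comes from the authors' earlier fitness-complexity algorithm; here is a sketch of that iteration on a made-up binary country-by-product export matrix (an assumed implementation for intuition, not the paper's actual code or data):

```python
import numpy as np

# Rows: countries; columns: products. 1 means the country exports the product.
M = np.array([
    [1, 1, 1, 1],   # diversified country exporting everything
    [1, 1, 0, 0],
    [1, 0, 0, 0],   # exports only the most ubiquitous product
], dtype=float)

F = np.ones(M.shape[0])  # country fitness
Q = np.ones(M.shape[1])  # product complexity
for _ in range(20):
    F_new = M @ Q                    # fitness: sum of complexities of exports
    Q_new = 1.0 / (M.T @ (1.0 / F))  # complexity: dragged down by low-fitness exporters
    F = F_new / F_new.mean()         # normalize each step to keep scales fixed
    Q = Q_new / Q_new.mean()
```

Diversified countries exporting less ubiquitous products come out with the highest fitness, which is the quantity the forecasting scheme pairs with GDP per capita.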
Here is the whole post, here is one excerpt:
If you’re 10–20: These are prime years!
- Go deep on things. Become an expert.
- In particular, try to go deep on multiple things. (To varying degrees, I tried to go deep on languages, programming, writing, physics, math. Some of those stuck more than others.) One of the main things you should try to achieve by age 20 is some sense for which kinds of things you enjoy doing. This probably won’t change a lot throughout your life and so you should try to discover the shape of that space as quickly as you can.
- Don’t stress out too much about how valuable the things you’re going deep on are… but don’t ignore it either. It should be a factor you weigh but not by itself dispositive.
- To the extent that you enjoy working hard, do. Subject to that constraint, it’s not clear that the returns to effort ever diminish substantially. If you’re lucky enough to enjoy it a lot, be grateful and take full advantage!
- Make friends over the internet with people who are great at things you’re interested in. The internet is one of the biggest advantages you have over prior generations. Leverage it.
- Aim to read a lot.
- If you think something is important but people older than you don’t hold it in high regard, there’s a decent chance that you’re right and they’re wrong. Status lags by a generation or more.
- Above all else, don’t make the mistake of judging your success based on your current peer group. By all means make friends but being weird as a teenager is generally good.
This paper explores the physics of the what-if question “what if the entire Earth was instantaneously replaced with an equal volume of closely packed, but uncompressed blueberries?”. While the assumption may be absurd, the consequences can be explored rigorously using elementary physics. The result is not entirely dissimilar to a small ocean-world exo-planet.
Here is the full analysis, via M.
Here is the transcript and audio, I am very pleased (and honored) to have been able to do this. She is an autism researcher, and so most of the discussion concerned autism, here is one excerpt:
COWEN: What would be the best understanding of autism, from your perspective?
DAWSON: The best understanding is seeing autism as atypical brain functioning, resulting in atypical processing of all information. So that’s information across domains — social, nonsocial; across modalities — visual, auditory; whatever its source, whether it’s information from your memory, information coming from the outside world, that is atypical. So that is very domain-general atypicality.
What autistic brains do with information is atypical. How it’s atypical, in my view, involves what I’ve called cognitive versatility and less mandatory hierarchies in how the brain works, such that, for example, an autistic brain will consider more possibilities, will nonstrategically combine information across levels and scales without losing large parts of it, and so on. And that applies to all information.
That is strictly my view. I’m not sure anyone would agree with me.
COWEN: Now often, in popular discourse, you’ll hear autism or Asperger’s associated with a series of personality traits or features of personality psychology — a kind of introversion or people being nerdy in some regard. In your approach, do you see any connection between personality traits and autism at all?
DAWSON: There is a small literature that shows some connection. I think it’s very weak, and I say no, I don’t think autism is about personality. Autism is sort of orthogonal to personality. The two are not related. Whatever relation there is does not . . . arises from some third factor, let’s say. If there is one — and again, the evidence is, I think, very weak connecting autism to personality — so just say that maybe, if there’s something, let’s say that personality in autistics might be more high variance. That would be my totally wild guess, but I don’t think autism itself is about personality.
And here is Michelle again:
We don’t — I hope we don’t look at a blind person who is a successful lawyer and assume that he is only very mildly blind or barely blind at all, and then look at a blind person who has a very bad outcome and assume that they must be very severely blind.
We do make those kinds of judgments in autism, saying, “The more atypical the person is, the worse they must be in some sense.” That kind of bias has not only harmed a lot of autistic people, it really has impeded research.
Here is Michelle on Twitter. We discuss and link to some of her research in the discussion.
It is a short essay, here are a few scattered bits:
The real output of the US manufacturing sector is at a lower level than before the 2008 recession; that means that there has not been real growth in US manufacturing for an entire decade. (In fact, this measure may be too rosy—the ITIF has put forward an argument that manufacturing output measures are skewed by excessive quality adjustments in computer speeds. Take away computers, which fewer and fewer people are buying these days, and US real output in manufacturing would be meaningfully lower.) Manufacturing employment peaked in 1979 at nearly 20 million workers; it fell to 17 million in 2000, 14 million in 2008, and stands at 12 million today. The US population has grown by 40% since 1979, while the number of manufacturing workers has nearly halved.
I think we should try to hold on to process knowledge.
Japan’s Ise Grand Shrine is an extraordinary example in that genre. Every 20 years, caretakers completely tear down the shrine and build it anew. The wooden shrine has been rebuilt again and again for 1,200 years. Locals want to make sure that they don’t ever forget the production knowledge that goes into constructing the shrine. There’s a very clear sense that the older generation wants to teach the building techniques to the younger generation: “I will leave these duties to you next time.”
There’s an entertaining line in the Brad Setser piece I linked to earlier. He tells us that one of the reasons that the US has such a high surplus in the services trade is that Americans have a low propensity to travel abroad. I don’t view that as a great way to earn a trade surplus.
There is much more at the link.