U.S.A. fact of the day
New Penn-Wharton study shows per-capita federal spending on each age group:
Seniors: $43,700
Children and young adults: $4,300
Here is more from Jessica Riedl.
My very interesting Conversation with Arthur C. Brooks
Here is the audio, video, and transcript. Here is part of the episode summary:
Tyler and Arthur cover how scarcity makes savoring possible and why knowing you’ll die young sharpens the mind, what twin studies tell us about the genetics of well-being and why that’s not actually depressing, the four habits of the genuinely happy, the placebo theory of happiness books, curiosity as an evolved positive emotion, the optimal degree of self-deception, why Arthur chose Catholicism rather than Orthodoxy, what the research says about accepting death, how he became an economist via correspondence school, AI’s effect on think tanks, the future of classical music, whether Trumpism or Reaganism is the equilibrium state of American conservatism, whether his views on immigration have changed, what he and Oprah actually agree on, which president from his lifetime he most admires, Barcelona versus Madrid, what 60-year-olds are especially good at, why he’s reading Josef Pieper, how he’ll face death, and much more.
Excerpt:
COWEN: What do you think of the view that books on happiness or the meaning of life, they’re a kind of placebo? They don’t help directly, but you feel you’ve done something to become happier, and the placebo is somewhat effective.
BROOKS: I think that there’s probably something to that, although there’s some pretty interesting new research that shows that the placebo effect is actually not real. Have you seen some of that new research?
COWEN: Yes, but I don’t believe it. Nocebos also seem to work in many situations.
BROOKS: I know. I take your broader point. I take your broader point. I think that the reason for that is that when people read most of the self-improvement literature, not just happiness literature, what happens is that they get a flush of epiphany, a new way of thinking. That feels really good. That feels really inspirational. The problem is it doesn’t take root.
It’s like the seeds that are thrown on a path in the biblical parable. They don’t go through the algorithm that I just talked about, and so not all of these things can be compared. I would not have gotten into this line of research and this line of teaching if I thought that it was just going to add another book to a long line of self-improvement books that make people feel good but don’t ultimately change their lives.
COWEN: Say a person reads a new and different book on happiness once a year at the beginning of the year. Now, under the placebo view, that’s a fine thing to do. It’ll get you a bit happier each year. Under your view, it seems there’s something wrong. Isn’t the placebo view doing a bit better there? You should read a book on happiness every year, a different one. It’ll revitalize you a bit. Whether or not it’s new only matters a little.
BROOKS: Yes. It might remind you of some things that you knew to be the truth that you had fallen away from. One of the things that I like to do is I like to read a good book by one of the church fathers, for example. They’re more or less saying the same thing. It reminds me of something that I learned as a boy and that I’ve forgotten as an adult. It might actually remind me to come back to many of these practices and many of these views.
I think that there are real insights. There’s real value that can come from science-based knowledge about how to live a better life. I think that you and I are both dedicated to science in the public interest and also science in the private interest as well. I think there is some good to be gotten through many of these ideas. Not all. Once again, not all happiness literature is created equal.
And:
COWEN: Why not cram all that contemplation of death into your last three months rather than your last 18 months? Do intertemporal substitution, right? Accelerate it. Ben Sasse probably is facing a pretty short timeline, but he’s done a remarkable job, even publicly, of coming to terms with what’s happening. Isn’t that better than two years of the same?
And:
COWEN: I think it’s fair to say what we call the right wing in America, it’s become much, much more Trumpy. Does this shift you to the left or make you question what the right wing was to begin with, or do you just feel lost and confused, or do you say, that’s great, I’m more Trumpy, too? How have you dealt with that emotionally and intellectually?
BROOKS: Yes. I’ll answer, but you’re going to have to answer after me, will you?
COWEN: Sure.
Interesting throughout.
MRU high school fellowship
How Matthias Blübaum can win it all
He is playing in the current Candidates tournament as the lowest-rated player, a mere 2693. It is considered a semi-miracle that he qualified at all, and he is not given much chance of winning the tourney.
And yet a path to the top remains.
First, he has not lost any of his first four games (all are draws), so he is hardly a weakie.
Second, and for my purposes more importantly, the tournament has winner-take-all rewards, so many players will be taking chances to try to move into the lead. Yet in chess, big chances with positive expected value are hard to come by, so players, in their determination to top the standings, often will take big chances with modestly negative expected value, especially in the opening phase of the game.
Now, if you are willing to take a negative expected value big chance, will you prefer to do so against the top players in the tourney, such as Caruana, or the lower-rated players, such as Blübaum? The answer is obvious.
So he will have his chances.
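The logic can be sketched with a toy Monte Carlo. Every number below is invented for illustration, and the model ignores pairings, colors, and tiebreaks, treating each player's fourteen results as independent draws: a "safe" strategy earns 0.50 expected points per game with low variance, a "risky" strategy earns 0.49 with higher variance, and yet under winner-take-all scoring the risky strategy reaches (or shares) first place more often.

```python
import numpy as np

rng = np.random.default_rng(0)
N_SIM, GAMES, OPPONENTS = 50_000, 14, 7

def total_points(p_win, p_draw, n_players):
    """Points over GAMES independent games: win = 1, draw = 0.5, loss = 0."""
    u = rng.random((N_SIM, n_players, GAMES))
    pts = np.where(u < p_win, 1.0, np.where(u < p_win + p_draw, 0.5, 0.0))
    return pts.sum(axis=2)

# Seven "safe" opponents: EV 0.50 points per game, low variance.
field_best = total_points(0.20, 0.60, OPPONENTS).max(axis=1)

# Our hero, two ways: safe (EV 0.50) vs. risky (EV 0.49, higher variance).
safe_hero = total_points(0.20, 0.60, 1)[:, 0]
risky_hero = total_points(0.49, 0.00, 1)[:, 0]

p_safe = (safe_hero >= field_best).mean()    # at least ties for first
p_risky = (risky_hero >= field_best).mean()

print(f"P(share of first | safe)  = {p_safe:.3f}")
print(f"P(share of first | risky) = {p_risky:.3f}")
```

The point is general: when only first place pays, trailing or lower-rated players should buy variance, even at a modest cost in expected score.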
My podcast with Russ Roberts on AI and education
On his EconTalk podcast, self-recommending…
Wednesday assorted links
1. NYT on Morton Feldman. Is he the most important American composer?
2. Transcript and video of Cass Sunstein lecture on Hayek at Mercatus.
3. Baseball cards for talents.
5. Regulating AI agents. And traps for AI agents.
6. How the Iranian government uses patronage to stay in power (WSJ). And how the Iranian economy is surviving wartime pressures (FT).
How to Make Judges and Referees Pay
A recent viral tweet, quoted by Elon Musk, points out that bartenders can be fined or even imprisoned if they serve alcohol to patrons who later kill someone while under the influence. Judges, in contrast, enjoy absolute or qualified immunity even when they repeatedly release defendants who go on to kill.
I agree that judges should face stronger incentives to make good decisions, but the obvious problem with penalizing judges who release people who later commit crimes is that judges would then have very little incentive to release anyone—and that too is a bad decision. Steven Landsburg solved this problem in his paper A Modest Proposal to Improve Judicial Incentives, published in my book Entrepreneurial Economics.
Landsburg’s solution is elegant: we must also pay judges a bounty when they release a defendant.
Whether judges would release more or fewer defendants than they do today would depend on the size of the cash bounty, which could be adjusted to reflect the wishes of the legislature. The advantage of my proposal is not its effect on the number of defendants who are granted bail but the effect on which defendants are granted bail. Whether we favor releasing 1 percent or 99 percent, we can agree that those 1 percent or 99 percent should not be chosen randomly. We want judges to focus their full attention on the potential costs of their decisions, and personal liability has a way of concentrating the mind.
One might object that a cash bounty will cost too much, but recall that the bounty is balanced by penalties when a released defendant commits a future crime. The bounties and penalties can be calibrated so that on average the program is budget-neutral. The key is to get the incentives right on the margin.
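The marginal logic can be sketched in a toy model. Everything below, from the Beta-distributed risk scores to the dollar figures, is invented for illustration: a judge who collects a bounty per release and pays a penalty per reoffense will release exactly the defendants whose risk falls below the bounty-to-penalty ratio, and setting the bounty near the expected penalty makes the program roughly budget-neutral on average.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reoffense risks for a pool of defendants (illustrative only).
risk = rng.beta(2, 8, size=100_000)   # mean risk around 0.20

penalty = 10_000.0                    # fine per released defendant who reoffends

# The legislature picks the release threshold; the bounty implements it:
# a judge releases exactly when bounty - penalty * risk > 0,
# i.e. when risk < bounty / penalty.
threshold = 0.30
bounty = penalty * threshold

released = risk < threshold
share_released = released.mean()
avg_risk_released = risk[released].mean()

# Expected net outlay per release at this bounty (positive = program pays out).
net_per_release = bounty - penalty * avg_risk_released

# Budget neutrality: set the bounty equal to the expected penalty among the
# released. (Strictly this shifts the threshold too, so the exact calibration
# is a fixed point; this is the first step.)
neutral_bounty = penalty * avg_risk_released
```

Note that changing the bounty changes *which* defendants clear the bar, which is exactly the margin the proposal cares about.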
The structure of this problem is quite general. Ben Golub, for example, writes:
There should be a retrospective reputational penalty imposed on referees who vote no on a paper because the paper is too simple technically — if that paper ends up being important. It’s an almost definitional indicator of bad judgment.
Quite right, but a penalty for rejection needs to be balanced with a bonus for acceptance. Get the marginal incentive right and quality will follow!
Economists on AI and economic growth and employment
We completed the most comprehensive study of how economists and AI experts think AI will affect the U.S. economy. They predict major AI progress—but no dramatic break from economic trends: GDP growth rates similar to today’s and a moderate decline in labor force participation. However, when asked to consider what would happen in a world with extremely rapid progress in AI capabilities by 2030, they predict significant economic impacts by 2050:
• Annualized GDP growth of 3.5% (compared to 2.4% in 2025)
• A labor force participation rate of 55% (roughly 10 million fewer jobs)
• 80% of wealth held by the top 10% (highest since 1939)
That is from this very good and very detailed Twitter thread, worth reading in its entirety. Note this:
Only 5.2% of the variance is between scenarios—attributable to disagreement about AI capabilities themselves…
Here is the full paper, over 200 pages long; I will be reading through it. The list of authors is impressive, with Ezra Karger in the lead, also including Kevin Bryan, Basil Halperin, and many more. For some while this will stand as the best set of estimates we have. Here are the related forecasts of Seb Krier.
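For scale, the gap between the two growth rates compounds substantially over the 25 years from 2025 to 2050; a quick check:

```python
# Compound the survey's rapid-progress growth rate (3.5%) against the
# 2025 baseline (2.4%) over the 25 years from 2025 to 2050.
ratio = (1.035 / 1.024) ** 25
print(f"GDP on the 3.5% path is {ratio:.2f}x the 2.4% path by 2050")
```

That is roughly a 31 percent larger economy by 2050 under the rapid-progress scenario.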
Is financial economics still economics?
That all sounded wonderful, and that core model and its offshoots dominated financial research for decades. The problem, however, was that it wasn't true, or at least it wasn't nearly as true as we had thought and hoped. When financial economists refined the models with more complete specifications, it turned out that Beta did not predict stock returns very well. Eugene Fama and Kenneth French delivered one of the final blows to earlier approaches with a 1992 paper showing that Beta had no explanatory power over expected returns at all. Since Fama himself was one of the original architects of CAPM-like reasoning, and French was also a renowned financial economist, these revisions to the model were credible. For all its original promise, marginalism, along with the concomitant notion of diminishing marginal utility, no longer seemed to help explain asset returns.
Under one plausible account of intellectual history, you can date the decline of marginalism to that 1992 paper. In the most rigorous, data-oriented, and highest-paying field of economics, namely finance, marginalist constructs had every chance to succeed. In fact, they ran the board for several decades. But over time they failed. In the most prestigious field of economics, marginalism has been in full retreat for over 30 years, and it shows no signs of making a comeback.
We already know that financial practice is dominated by the (non-economist) quants. But how about financial economics research, the parts that are still done by economists? What direction is that work moving in?
I was struck by a 2024 paper published in the Journal of Financial Economics, one of the two leading journals of financial economics (Journal of Finance is the other). The authors are Scott Murray, Yusen Xia, and Houping Xiao, and the title is “Charting by Machines.” The core result is pretty simple, and best expressed in the well-written abstract:
“We test the efficient market hypothesis by using machine learning to forecast stock returns from historical performance. These forecasts strongly predict the cross-section of future stock returns. The predictive power holds in most subperiods and is strong among the largest 500 stocks. The forecasting function has important nonlinearities and interactions, is remarkably stable through time, and captures effects distinct from momentum, reversal and extant technical signals. These findings question the efficient market hypothesis and indicate that technical analysis and charting have merit. We also demonstrate that machine learning models that perform well in optimization continue to perform well out-of-sample.” (Murray, Xia, and Xiao 2024, p. 1)
Or consider the new paper by Borri, Chetverikov, Liu, and Tsyvinski (2024), which proposes a new non-linear, single-factor asset pricing model. From the abstract: “Most known finance and macro factors become insignificant controlling for our single-factor.” Yet you won’t find traditional economic variables discussed in this paper; it is all about the math, in particular an application of the Kolmogorov-Arnold representation theorem.
In other words, the successful approach to predicting returns is giving up on traditional portfolio theory and using the “theory-less” technique of machine learning. Although this is published in the Journal of Financial Economics, in some significant sense it is not economic reasoning at all. It is calculation, combined with expertise in math and computer science. The modeling is not economic modeling in a manner that has ties to marginalism or standard intuitive microeconomic theory. And the work is predicting excess returns in a pretty robust and successful way…
There is a recent working paper which is perhaps more striking yet, by Antoine Didisheim, Shikun (Barry) Ke, Bryan T. Kelly, and Semyon Malamud. They pick up from Arbitrage Pricing Theory (APT), a well-established idea from financial economics. APT typically looks for “factors” in the data which predict excess returns, and a traditional APT model might have found five or six such factors. Are “inflation” or perhaps “the term structure of interest rates” useful factors? Well, that can be debated, but if so, those results sound pretty intuitive. But those intuitions seem to be disappearing. The authors apply machine learning methods to look for more factors. As we know, machine learning is very good at finding non-obvious relationships in the data. The largest model they built has 360,000 (!) factors, and it reduces pricing errors by 54.8 percent relative to the classic six-factor model from Fama and French. Bravo to the authors, but what kinds of intuitions do you think possibly can be supported by those 360,000 factors?
That is from my new The Marginal Revolution: Rise and Decline, and the Pending Revolution in AI.
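For readers who want the mechanics: the “Beta” the excerpt refers to is the slope from regressing a stock’s excess returns on the market’s excess returns. A minimal synthetic sketch, with all numbers illustrative; here the CAPM relation holds by construction, which is exactly what Fama and French found it fails to do in real data:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 5_000                                   # synthetic monthly observations

mkt_excess = rng.normal(0.006, 0.045, T)    # market return minus risk-free rate
true_beta = 1.3
# Stock excess return = beta * market excess return + idiosyncratic noise.
stock_excess = true_beta * mkt_excess + rng.normal(0.0, 0.03, T)

# The time-series regression behind "Beta": slope of stock on market.
beta_hat, alpha_hat = np.polyfit(mkt_excess, stock_excess, 1)

# CAPM's cross-sectional claim: a stock's expected excess return equals
# its beta times the market risk premium.
predicted_premium = beta_hat * mkt_excess.mean()
```

The 1992 result was that, in actual cross-sections of stocks, estimated betas like `beta_hat` did not line up with average returns the way the last line predicts.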
The economics of dropout risk
Bryan Caplan keeps hammering this point home; it is good to see follow-up work:
In the United States, college dropout risk is sizable. We provide new empirical evidence that beliefs about the likelihood of earning a bachelor’s degree predict college enrollment, and that the distribution of these beliefs exhibits widespread optimism. We incorporate this distribution of beliefs into an overlapping generations model with college as a risky investment that can be financed via federal loans, grants, family transfers, or earnings. We then examine the welfare impact of access to federal student loans. We find that access can reduce welfare for young adults who are low-skilled, poor, and optimistic, due to their mistaken beliefs.
That is from AEJ: Macroeconomics, by Emily G. Moschini, Gajendran Raveendranathan, and Ming Xu. Via the excellent Kevin Lewis.
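The welfare mechanism is simple expected-value arithmetic. The numbers below are invented for illustration and are not from the paper: if an optimistic student overestimates the probability of finishing, a college gamble that is negative in expectation can look strongly positive.

```python
# Stylized numbers, invented for illustration (not from the paper).
cost = 100_000           # tuition plus forgone earnings over four years
premium_grad = 250_000   # lifetime earnings premium with a completed degree
premium_drop = 20_000    # premium from "some college" without a degree

def expected_payoff(p_graduate):
    """Expected lifetime payoff of enrolling, given graduation probability."""
    return p_graduate * premium_grad + (1 - p_graduate) * premium_drop - cost

believed = expected_payoff(0.90)   # the optimistic student's belief
actual = expected_payoff(0.30)     # completion rate for a marginal student

print(f"Enrolling looks worth ${believed:,.0f} to the student,")
print(f"but is worth ${actual:,.0f} in expectation")
```

Easier access to federal loans lets more such students take the believed-positive, actually-negative gamble, which is how access can reduce their welfare.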
I appear on the Coleman Hughes podcast
Tuesday assorted links
1. “… presenting Economics as empirical and socially relevant may broaden the profile of those who consider the field.” But it does not get more people interested.
2. Youth happiness has been rising in many places, possibly most.
3. The NIH as an implicit regulatory body.
4. The Lebron critique of prediction markets.
5. Adam Tooze: “It is a truism of the moment that China is the last adult in the room.”
6. Quantum breakthroughs? And another account. Will the Satoshi wallet remain safe?
7. Shall we organize scientific literatures around claims rather than papers?
A reminder (for academics)
Yes, there are skills AIs haven’t mastered. But if your skill still appears to be the exclusive province of humans, that might mean the major AI companies do not yet consider it very important to master right away. Eventually it will rise to the top of the list.
Here is more from my Free Press essay on AI. If not for the copied passage, it seems no one was noticing this book review? (NYT, read the emendation)
New issue of Econ Journal Watch
EJW Volume 23, Issue 1, March 2026
Specification Searching in the Race between Education and Technology: Joseph Francis criticizes a canonical model of the American labor market, which has been used to advocate for more funding for education to reduce inequality. He shows how the model has routinely failed to predict the evolution of the college wage premium. Ad hoc econometric adjustments have been necessary to make the model fit the data, most notably in Claudia Goldin and Lawrence F. Katz’s well-known book. (The commented-on authors are hereby invited to reply in a future issue.)
Globalization and the China Shock: A Reassessment: David Autor, David Dorn, and Gordon Hanson estimated the effect of imports of manufactured goods from China from 1990 to 2007 on employment, wages, and social welfare payments in the USA, concluding that imports from China reduced manufacturing employment and lowered wages of workers in non-manufacturing industries. Robert Kaestner argues that the authors’ focus only on Chinese imports, which are correlated with imports from other countries and likely other omitted variables, muddles the interpretation and usefulness of their results. Kaestner argues that their estimates do not measure the effect of Chinese imports on employment and wages holding all other things equal, and do not even measure the broader equilibrium effect of Chinese imports on outcomes that includes changes in imports from other countries. Overall, the evidence suggests that omitted variable bias is likely, which renders their estimates uninformative. (The commented-on authors are hereby invited to reply in a future issue.)
Learning on machine learning on the housing supply impact of land use reforms: An Urban Studies article reports relatively modest housing-stock gains from liberalization, based on a dataset of reforms identified via machine learning applied to newspaper coverage. Researchers at the American Enterprise Institute challenge the article’s methodology and conclusions, and the Urban Studies authors respond.
An Article in Science on Covid Origins Contains a Fundamental Error: An influential article claimed that Bayesian analysis of the molecular phylogeny of early SARS-CoV-2 cases indicated that the likelihood that two successful introductions to humans had occurred was greater than the likelihood that just one had occurred. After correcting a fundamental error in Bayesian reasoning, the results presented in that paper imply larger likelihood for a single introduction, reducing the plausibility of the wet-market zoonosis account of Covid’s origins. (The commented-on authors were invited to reply and the invitation remains open.)
A Critique of Synthetic Control Method Studies on Covid-19 Policy—Evidence from Sweden: Five studies employing the Synthetic Control Method (SCM) conclude that Sweden would have experienced lower mortality had it imposed a mandatory lockdown in early 2020. Dividing Sweden into four hypothetical countries based on winter holiday timing—a proxy for pre-lockdown viral seeding—Jonas Herby shows that the estimated lockdown effect varies dramatically across regions with identical policies, suggesting SCM captures variation in viral spread rather than a causal policy effect. Sweden’s low excess mortality in the end suggests that Sweden’s state epidemiologist, Anders Tegnell, was right all along. (The commented-on authors are hereby invited to reply in a future issue.)
Central Banking Research Is Increasingly Directed to Environment, Inequality, Gender, and Race: Radu Șimandan and Cristian Valeriu Păun use the Scopus database to show how environment, inequality, gender, and race have soared as topics in research outlets supposedly focused on money and banking. They discuss the hazards of subverting price stability and other traditional central bank mandates.
Power Analysis Is Essential—A Case Study in Rounded Shapes: A Journal of Consumer Research article reported an A/B test where simply rounding the corners of square buttons increased click-through rate by 55 percent, but provided no power analysis. Ron Kohavi and coauthors show that the original study was highly underpowered. They report that three high-powered A/B replications, each over two thousand times larger, had estimated effects approximately two orders of magnitude smaller than initially claimed. (The commented-on authors are hereby invited to reply in a future issue.)
“Impartial spectator” in Adam Smith’s The Theory of Moral Sentiments: In the previous issue, a critique alleged that numerous scholars flatten Smith’s “impartial spectator.” Jack Weinstein responds with “Adam Smith’s Impartial Spectator Is Neither Divine Nor an Ideal Observer,” and the critics renew their case against flattening “impartial spectator.”
The Ideological Profile of France’s Economic Bestsellers: Alexis Sémanne inspects the 100 economics bestsellers for 2024, as listed by a leading French bookseller. He develops seven categories and evaluates each book for its ideological tendency. Only a few of the books offer a freedom-oriented perspective.
Green Vanities in Europe: John Constable reviews A Green Entrepreneurial State? Exploring the Pitfalls of Green Deals, edited by Magnus Henrekson, Christian Sandström, and Mikael Stenkula, a book which reveals more than the fact that green deals in Europe have been failures.
EJW thanks its referees and others who contribute to its mission.
EJW Audio:
- Michael Weissman on Lab Leak and Science
- Dan Johansson on Economics without Entrepreneurship and Institutions
- Henry Hardy on Isaiah Berlin
Sentences to ponder
This matters for the AI question, and the book leaves it unfinished. If the breakthroughs of the past required social conditions, not just cognitive capacity, then what does it mean when the next breakthroughs are produced by systems that have no social conditions at all? A neural net does not need a university chair or financial independence from the church. It does not need to reorganize its commitments. It does not, in any recognizable sense, have commitments. The machine that replaces the marginalist is not a better marginalist. It is a different kind of thing entirely.
That is from Jônadas Techio, presumably with LLMs; this review of The Marginal Revolution is interesting throughout. And this:
Maybe the book demonstrates only that Cowen personally remains good at something the field no longer needs.