Category: Economics

AI, Unemployment and Work

Imagine I told you that AI was going to create a 40% unemployment rate. Sounds bad, right? Catastrophic even. Now imagine I told you that AI was going to create a 3-day working week. Sounds great, right? Wonderful even. Yet to a first approximation these are the same thing. 60% of people employed and 40% unemployed is the same number of working hours as 100% employed at 60% of the hours.
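The equivalence is simple arithmetic; here is a minimal sketch, with a hypothetical 40-hour standard week as the illustrative baseline:

```python
# Total labor supplied = (share employed) x (hours per worker).
full_time_hours = 40  # hypothetical standard work week

# Scenario A: 40% unemployment, the employed work full weeks
hours_a = 0.60 * full_time_hours

# Scenario B: everyone employed, at 60% of the hours (a ~3-day week)
hours_b = 1.00 * (0.60 * full_time_hours)

print(hours_a, hours_b)  # both come to 24.0 hours per person per week
```

Same aggregate hours, radically different distribution across people.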

So even if you think AI is going to have a tremendous effect on work, the difference between catastrophe and wonderland boils down to distribution. It’s not impossible that AI renders some people unemployable, but that proposition is harder to defend than the idea that AI will be broadly productive. AI is a very general purpose technology, one likely to make many people more productive, including many people with fewer skills. Moreover, we have more policy control over the distribution of work than over the pure AI effect on work. Declare an AI dividend and create some more holidays, for example.

Nor is this argument purely theoretical. Between 1870 and today, hours of work in the United States fell by about 40% — from nearly 3,000 hours per year to about 1,800. Hours fell, but unemployment did not increase. Moreover, not only did work hours fall, but childhood, retirement, and life expectancy all increased. In fact, in 1870 about 30% of a person’s entire life was spent working — people worked, slept, and died. Today it’s closer to 10%. Thus, over the past 100+ years, the amount of work in a person’s lifetime has fallen by about two-thirds, and the amount of leisure, including retirement, has increased. We have already sustained a massive increase in leisure. There’s no reason we cannot do it again.
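The figures above check out on the back of an envelope, using only the numbers cited in the paragraph:

```python
# Annual hours: ~3,000 in 1870 vs ~1,800 today
decline = 1 - 1800 / 3000
print(f"{decline:.0%} fall in annual hours")  # 40%

# Share of a lifetime spent working: ~30% in 1870 vs ~10% today
lifetime_fall = 1 - 0.10 / 0.30
print(f"{lifetime_fall:.0%} fall in lifetime work")  # about two-thirds
```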

Financial Regulation and AI: A Faustian Bargain?

Important work is just flowing these days, and much of it (of course) concerns AI:

We study whether AI methods applied to large-scale portfolio holdings data can improve financial regulation. We build a state-of-the-art, graph-based deep learning model tailored to security-level data on the holdings of financial intermediaries. The architecture incorporates economic priors and learns latent representations of both assets and investors from the network structure of portfolio positions. Applied to the universe of non-bank financial intermediaries, covering nearly $40 trillion in wealth, the model substantially outperforms existing approaches in out-of-sample forecasts of intermediary trading behavior, including in crisis episodes. The model has more than ten times the explanatory power for the cross-sectional variation in asset returns during stress events compared to traditional approaches, and it outperforms existing systemic risk metrics at the institution level. Its learned representations show that the holdings network encodes rich, economically interpretable information about firesale vulnerability. The architecture is fully inductive, producing informative estimates even when entire asset classes or investors are withheld from training. We embed our empirical approach into a macroprudential optimal policy framework to formalize why these objects matter for policy and welfare. We show that even in an equilibrium environment subject to the Lucas critique, the predictive information from the model improves welfare by sharpening the cross-sectional targeting of policy interventions, and we demonstrate a complementarity between prediction and structural knowledge.

That is a new paper by Christopher Clayton and Antonio Coppola, of Yale and Stanford respectively.

Herbert Hoover is still underrated

We study the effects of large-scale humanitarian aid using novel data from the American Relief Administration’s (ARA) intervention during the 1921-1922 famine in Soviet Russia. We find that the allocation of relief closely tracked underlying food scarcity and was uncorrelated with subnational politics. We show that ARA rations reduced food prices, raised caloric intake, lowered the prevalence of relapsing fever, and increased rural birth cohorts. The aid benefited the poorest peasants most and proved most effective in provinces with higher levels of human capital. Back-of-the-envelope calculations suggest that, absent ARA relief, the 1926 population would have been 4.4 million lower.

That is from a new paper by Natalya Naumenko (my colleague), Volha Charnysh, and Andrei Markevich.

Stephen Pimentel has an excellent review of *The Marginal Revolution*

Here is one very good paragraph of many:

Cowen is excellent on the question of why the marginalist insight had to wait so long, and why it eventually came in a simultaneous eruption across countries and three intellectual temperaments. The answer involves the slow assembly of preconditions: advances in calculus, the rise of statistical thought, the professionalization of economics as a discipline, and certain changes in the philosophy of science associated with the Victorian debate between inductive and deductive methods. Progress in science, Cowen suggests, is rarely a matter of the lone genius, but rather of the alignment of previously dispersed elements. The genius arrives when the ground has been prepared to receive the insight.

And another:

There is a discomforting codicil to all of this. Perhaps, Cowen suggests near the book’s end, the intuitions of 20th-century microeconomics were always a kind of compensation for a deeper ignorance. Perhaps we elevated intuitive reasoning, with its clean parables of marginal utility, and elegant supply-and-demand diagrams, because they were what we had, and we mistook their availability for adequacy. Machine learning models that find hundreds of thousands of factors in financial data are not exactly refuting marginalism. They are revealing the scale of what marginalism was never equipped to see. Our intuitions were always a small corner of understanding, swimming in a larger froth of epistemic chaos. The illusion has been stripped bare.

Here is the full review.  Here is the book itself.  Via Mike Doherty.

Andy Hall advice on AI and economic research

Here is the document, excerpt:

In January, I released the results of an experiment showing how Claude Code could helpfully extend old papers “automagically.” It was pretty astonishing to me. Claude was able to come up with a plan, scrape the web, write code, run regressions, create tables and figures, and write a whole memo on what it had found—all in about 45 minutes.

Are AI tools perfect? No. Claude made some interesting mistakes in that extension, and since then, I’ve seen it make a whole bunch more. Are human researchers perfect, though? Hell no. 

The evidence that AI tools should now be an essential part of your toolkit is overwhelming—look at the recent work that my Stanford colleague Yiqing Xu has put out, for example, which allows for the automated verification of empirical research. This is so clearly valuable. When it comes to empirical work, we’re never going back to the pre-AI world.

Here is a thread on the paper, heedworthy throughout.  If you do not have some kind of decent plan here, other economists will leave you in the dust.  Even if it is only a minority of “other economists” their total leverage and impact will be extreme.

Ludwig Straub wins the Clark medal

Here is his home page:

Ludwig Straub is a professor of economics at Harvard University. His research areas are macroeconomics and international economics. Among his topics of interest are the recent decline in the natural rate of interest, rising levels of private and public debt, and the transmission of monetary and fiscal policy. Ludwig also has an active research agenda solving and analyzing heterogeneous-agent models. Among his most recent papers is a 2025 paper studying the short-run effects of tariff shocks.

Here is his Google Scholar page.  The Medal citation gives an overview of his work.  Congratulations!

Why do Americans No Longer Work So Much More Than Non-Americans?

In the 1990s, Americans used to work much more than non-Americans. Nowadays, about half of the gap in hours worked has reversed. To evaluate the convergence of working hours, we develop a tractable model of labor supply enriched with multiple sources of heterogeneity across individuals, an extensive margin of participation, multi-member households, and an elaborate system of taxes and benefits upon non-employment. Using detailed measurements from micro-level and aggregate datasets, we identify model parameters and sources of heterogeneity across individuals for various countries. We run a horse race between competing explanations and find that U.S. hours per person declined after 2000 owing mainly to the rise of government health benefits provided to the non-employed. Non-U.S. countries have generous benefits for the non-employed, but this generosity has not changed as much over time as in the United States, and public health coverage does not depend on employment status or income levels. For these countries, the rise of labor supply is generally accounted for by a mix of factors, such as the rise of wages and the falling disutility of work.

That is from a new NBER working paper by Serdar Birinci, Loukas Karabarbounis & Kurt See.

The Public Choice Outreach Conference!

The annual Public Choice Outreach Conference is a crash course in public choice. The conference is designed for undergraduates and graduates in a wide variety of fields. It’s entirely free. Indeed, scholarships are available! The conference will be held Friday, June 12 to Sunday, June 14, in Reston, VA, near Washington, DC. Lots of great speakers, including Tyler, myself, Bryan Caplan, Robin Hanson, Jon Klick, Shruti Rajagopalan and more.

Please apply and encourage your students to apply.

Migrant Income and Long-Run Economic Development

We study how international migrant income prospects affect long-run development in origin areas. We leverage the 1997 Asian Financial Crisis exchange rate shocks in a shift-share identification strategy across Philippine provinces. Initial migrant income shocks are magnified six-fold over time, increasing domestic income, education levels, migrant skills, and high-skilled migration. Remarkably, 74.9 percent of long-run income gains come from domestic rather than migrant income. Trade driven impacts of exchange rate shocks are orthogonal to effects via migrant income. A structural model reveals that 19.7 percent of long-run income gains stem from educational investments. International migration fosters broad economic development in origin communities.

That is from a recent AER piece by Gaurav Khanna, Emir Murathanoglu, Caroline Theoharides, and Dean Yang.  Here is a good thread on the piece.

The CA Minimum Wage Increase: Summing Up

Two recent joint papers, *Did California’s Fast Food Minimum Wage Reduce Employment?* by Clemens, Edwards and Meer, and *The Effects of California’s $20 Fast Food Minimum Wage on Prices* by Clemens, Edwards, Meer and Nguyen, give what I think is a plausible and consistent account of California’s $20 fast food minimum wage.

California’s $20 fast food minimum wage raised wages in the sector by roughly 8 percent relative to the rest of the country but employment fell by 2.3 to 3.9 percent (depending on specification, median ~3.2%), translating to about 18,000 lost jobs. Food away from home (FAFH) prices in California’s four CPI-reporting MSAs rose 3.3–3.6 percent relative to 17 control MSAs. Falsification tests on Food at Home and All Items Less Food and Energy show zero differential movement—this is specific to restaurant prices.

What’s interesting is that the papers are independently estimated but the fit is consistent. The price paper uses Andreyeva et al.’s demand elasticity of -0.8 to convert the estimated price increases into implied quantity declines: about 3.9–4.1 percent in limited-service and 1.7–1.8 percent in full-service. These align well with the employment declines of 3.2 and 2.1 percent estimated in the first paper.
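The elasticity conversion is one line of arithmetic. Here is a sketch; note that the segment-level price increases below are backed out from the reported quantity declines, not quoted from the paper:

```python
# Demand elasticity from Andreyeva et al., as used in the price paper
elasticity = -0.8

def implied_quantity_decline(price_increase_pct, eps=elasticity):
    """Percent change in quantity implied by a percent price increase."""
    return eps * price_increase_pct

# Backing out the price rises consistent with the reported quantity
# declines (illustrative, derived numbers):
for q_decline in (3.9, 4.1, 1.7, 1.8):
    p = q_decline / -elasticity
    print(f"a {q_decline}% quantity decline implies a ~{p:.1f}% price rise")
```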

The consistency tells us something about the mechanism. One thing we have learned about the minimum wage in recent years is that the pass-through effect is large and more of the employment decline is driven by pass through than by labor-capital substitution. In other words, prices rose, quantity demanded fell, and that’s what killed the jobs—not robots replacing workers. Not today, anyway.

In terms of welfare, the bulk of employed workers get an 8% wage increase, while a small minority are disemployed. The big transfer was from consumers to workers. California has roughly 39 million residents, all of whom face 3.3–3.6% higher FAFH prices. The transfer is likely regressive — lower-income households spend a larger budget share on fast food specifically. So the policy effectively taxes low-income consumers generally to raise wages for a subset of low-income workers, while eliminating jobs for another subset. Your mileage may vary, but I don’t see this as a big win for workers. We thought small increases in the minimum wage were absorbed–maybe some were, or maybe the effects were just hard to estimate–but you can’t extrapolate from small increases to big ones–the effect is non-linear. Big increases in the minimum wage start to bite.
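A quick consistency check on those magnitudes; the sector employment base here is inferred from the numbers in the text (18,000 jobs lost at a roughly 3.2% median decline), not independently sourced:

```python
# Reported: ~18,000 jobs lost at a median employment decline of ~3.2%
jobs_lost = 18_000
median_decline = 0.032

# Implied pre-policy fast-food employment base
employment_base = jobs_lost / median_decline
print(f"~{employment_base:,.0f} workers in the sector")  # roughly 560,000

# Rough split of worker outcomes
still_employed = employment_base - jobs_lost
share_disemployed = jobs_lost / employment_base
print(f"{share_disemployed:.1%} disemployed; "
      f"~{still_employed:,.0f} keep jobs at the ~8% higher wage")
```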

As usual, when it comes to fast food there is no such thing as a free lunch.

Addendum: Clemens’s JEP paper continues to be the masterclass in how to think through minimum wage issues.

Economic growth and the rise of large firms

Rich and poor countries differ in the size distribution of business firms. This paper shows that the right tail of the firm size distribution systematically grows thicker with economic development, both within countries over time and across countries. The author develops a simple idea search model with both endogenous growth and an endogenous firm size distribution. The economy features an asymptotic balanced growth path. Along the transition, Gibrat’s law holds at each date, and the right tail of the firm size distribution becomes monotonically thicker. The firm size distribution converges to Zipf’s distribution. The model also implies that policies favouring large firms can improve welfare due to the externality associated with idea search. Finally, the author extends the results obtained in the simple model to a general class of idea search models. Under common functional form assumptions, this model stands out as the only model within that class that is consistent with both Gibrat’s law and a thickening right tail.

That is by Zhang Chen, and a revised version will be appearing in Econometrica.

Advice for economics graduate students (and faculty?) vis-a-vis AI

From Isiah Andrews, via Emily Oster and the excellent Samir Varma.  A good piece, though I think it needs to more explicitly consider the most likely case, namely that the models are better at all intellectual tasks, including “taste,” or whatever else might be knockin’ around in your noggin…I am still seeing massive copium.  But the models still are not able to “operate in the actual world as a being.”  Those are the complementarities you need to be looking for, namely how you as a physical entity can enhance the superpowers of your model, or should I express that the other way around?  That might include gathering data in the field, persuading a politician, or raising money.  I am sure you can think of examples on your own.

The President(s) Fought the Law and the Law Won

In our textbook, Modern Principles, Tyler and I emphasize that Congress and the President are subject to a higher law, the law of supply and demand. In an excellent column, Jason Furman gives a clear example of how difficult it is to fight the law of inelastic demand:

…Today a given number of autoworkers can make, according to my calculations, three times as many cars in a year as they could 50 years ago.

The problem is that consumers do not want three times as many cars. Even as people get richer, they increase their spending on manufactured goods only modestly, preferring instead to spend more on services like travel, health care and dining out. There are only so many cars a family can own, but that’s not the case for expensive vacations or fancy meals. As a result we have fewer people working in auto factories and more people working in luxury resorts and the like.

These forces — rising productivity but steady demand — explain why the United States was losing manufacturing job share as far back as the 1950s and 1960s, long before trade became a major factor.