International Comparison of Physician Incomes
We compare physician incomes using tax data from the United States, Canada, Sweden, and the Netherlands. Physicians are concentrated in the top percentiles of the income distribution in all four countries, especially in the United States and certain specialties. Physician incomes are highest in the United States, and a decomposition shows that this mainly reflects differences in overall income distributions, rather than physicians’ locations in those distributions. This suggests that broader labor market differences, and thus physicians’ outside options, drive absolute incomes. Shifting US physicians’ incomes to match relative positions in other countries’ distributions would only marginally reduce healthcare spending.
By Aidan Buehler et al., from a new NBER working paper.
Who profits from prediction markets?
It seems execution beats foresight:
Retail traders correctly forecast asset price direction yet lose money. Using 222 million prediction market trades with observable terminal payoffs, we decompose returns into a directional component (did the trader pick the right side?) and an execution component (did the trader get a favorable price?). Traders with above-random accuracy earn negative returns because they arrive late and pay unfavorable prices; traders with near-random accuracy profit through superior execution. These two dimensions of skill are nearly orthogonal (ρ ≈ 0.13), and split-sample tests confirm both are persistent. What separates profitable from unprofitable traders is not forecasting ability but execution: automated traders pay 2.52 cents less per contract than casual traders, and this gap alone accounts for the sign of returns across trader types. Being right and making money are not the same thing.
That is from Joshua Della Vedova. Via John de Palma.
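The paper's decomposition is easy to state in code. The sketch below is a stylized version: I take "fair price" to be some benchmark price at trade time (the paper's exact benchmark construction may differ), so that realized return splits, by construction, into a directional piece and an execution piece.

```python
def decompose_return(payoff, paid_price, fair_price):
    """Split a trade's per-contract return into two components.

    directional: was the trader on the right side, judged at the
                 benchmark price (positive if the chosen side was
                 cheap relative to its terminal payoff)?
    execution:   did the trader transact at a favorable price
                 relative to the benchmark?
    The two components sum to the realized return by construction.
    """
    directional = payoff - fair_price      # right side of the market?
    execution = fair_price - paid_price    # good fill vs. benchmark?
    return directional, execution

# A trader who picks the winning side (payoff = 1.00) but arrives
# late and pays 0.97 when the benchmark was 0.90 wins on direction
# (+0.10) yet loses 0.07 to execution, netting only 0.03:
d, e = decompose_return(payoff=1.00, paid_price=0.97, fair_price=0.90)
```

This makes the headline result concrete: a trader can have genuine directional skill and still earn negative returns if the execution term is persistently negative enough.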
Who is a victim?
Moral disagreement across politics revolves around the key question, “Who is a victim?” Twelve studies explain moral conflict with assumptions of vulnerability (AoVs): liberals and conservatives disagree about who is especially vulnerable to victimization, harm, and mistreatment. AoVs predict moral judgments, implicit attitudes, and charitable behavior—and explain the link between ideology and moral judgment (usually better than moral foundations). Four clusters of targets—the Environment, the Othered, the Powerful, and the Divine—explain many political debates, from immigration and policing to religion and racism. In general, liberals see vulnerability as group-based, dividing the moral world into groups of vulnerable victims and invulnerable oppressors. Conservatives downplay group-based differences, seeing vulnerability as more individual and evenly distributed. AoVs can be experimentally manipulated and causally impact moral evaluations. These results support a universal harm-based moral mind (Theory of Dyadic Morality): moral disagreement reflects different understandings of harm, not different foundations.
That is from a recent paper by Jake Womick, Emily Kubin, and Kurt Gray. Via the excellent, non-victimized Kevin Lewis.
Alternatives to 911
Almost a quarter-billion calls are placed to 911 each year in the United States. A large share of them involve social problems, not crimes or emergencies—yet police are dispatched in response. This review traces how the 911 emergency system’s institutional design shapes demand for police, who is excluded from or ill served by this system, and what alternatives exist, including nonemergency lines (with police response), government hotlines (211, 311, 988), civilian crisis teams, and community-based resources. Among the universe of municipal police departments with at least 100 sworn officers in 2020, covering 107 million US residents, police have absorbed broad social service functions, with the availability of formal alternatives restricted to the largest cities. The evidence suggests that the primacy of police reflects institutional reproduction more than public need. I propose priorities for future research.
That is from a new NBER working paper by Bocar A. Ba.
How frequent are price bubbles?
We examine the historical frequency of stock market booms, crashes, and bubbles in the United States from 1792 to 2024 using aggregate market data and industry-level portfolios. We define a bubble as a large boom followed by a crash that reverses the market’s prior gains. Bubbles are extremely rare. We extend the industry-level analysis of Greenwood, Shleifer, and You (2019) through 2024 and replicate their findings out of sample using Cowles Commission industry data from 1871 to 1938. Booms do not reliably predict crashes, but they do predict higher subsequent volatility, increasing the likelihood of both large gains and large losses.
That is from a new NBER working paper by William N. Goetzmann, Otto Manninen, and James Tyler.
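The operational definition — a large boom followed by a crash that reverses the prior gains — can be sketched as a simple scan over a price series. The doubling threshold, 40% drawdown, and 24-month windows below echo the Greenwood, Shleifer, and You (2019) style of definition but are my illustrative assumptions, not this paper's exact parameters.

```python
def is_bubble(prices, boom_mult=2.0, crash_frac=0.6, window=24):
    """Return True if, at some month t, the price has at least
    doubled over the prior `window` months (boom) and then falls to
    `crash_frac` of its running peak within `window` months (crash)."""
    n = len(prices)
    for t in range(window, n):
        if prices[t] >= boom_mult * prices[t - window]:        # boom
            peak = prices[t]
            for s in range(t + 1, min(t + window + 1, n)):
                peak = max(peak, prices[s])
                if prices[s] <= crash_frac * peak:             # crash
                    return True
    return False
```

Note what the definition does not count: a doubling that simply plateaus is a boom, not a bubble — which is exactly why, in the paper's data, booms turn out to be common and bubbles rare.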
The Vietnam War and racial integration
The Vietnam draft conscripted hundreds of thousands of young Americans into an integrated military. I combine near-random draft lottery variation with administrative voter data to study the long-run racial integration effects of coerced national service. Black and Native American veterans became more likely to marry white spouses, identify as Republicans, and live in more-integrated neighborhoods. Improved economic standing may partly mediate these effects. Effects are larger for Southerners and are precisely null for white veterans. Coerced military service generates substantial but asymmetric cross-racial political convergence and racial integration: Vietnam-era service caused about 20 percent of affected cohorts’ interracial marriages.
That is from a recent NBER working paper by Zachary Bleemer.
The value of good high schools
Improving education and labor market outcomes for low-income students is critical for advancing socioeconomic mobility in the United States. We use longitudinal data on five cohorts of 9th grade students to explore how Massachusetts public high schools affect the longer-term outcomes of students, with a special focus on students from low-income families. Using detailed administrative and student survey data, we estimate school value-added impacts on college outcomes and earnings. Observationally similar students who attend a school at the 80th percentile of the value-added distribution instead of a school at the 20th percentile are 11% more likely to enroll in college, are 31% more likely to graduate from a four-year college, and earn 25% (or $10,500) more annually at age 30. On average, schools that improve students’ longer-run outcomes the most are those that improve their 10th grade test scores and increase their college plans the most.
That is from a new NBER working paper by
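A quick back-of-envelope check on the earnings figure: if a 25% gain corresponds to $10,500, the implied counterfactual baseline at age 30 is about $42,000. That baseline is implied by the reported numbers, not stated in the abstract.

```python
# Implied baseline earnings at age 30: a 25% gain equal to $10,500
# means the counterfactual (20th-percentile school) earnings were
# gain_dollars / gain_pct.
gain_pct = 0.25
gain_dollars = 10_500
baseline = gain_dollars / gain_pct   # 42,000
```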
New results on the economic costs of climate change
I promised you I would be tracking this issue, and so here is a major development. From the QJE by Adrien Bilal and Diego R. Känzig:
This paper estimates that the macroeconomic damages from climate change are an order of magnitude larger than previously thought. Exploiting natural global temperature variability, we find that 1°C warming reduces world GDP by over 20% in the long run. Global temperature correlates strongly with extreme climatic events, unlike country-level temperature used in previous work, explaining our larger estimate. We use this evidence to estimate damage functions in a neoclassical growth model. Business-as-usual warming implies a present welfare loss of more than 30%, and a Social Cost of Carbon in excess of $1,200 per ton. These impacts suggest that unilateral decarbonization policy is cost-effective for large countries such as the United States.
Here is an open access version. You may recall that earlier estimates of climate change costs were more like a five to ten percent welfare loss to the world. I do not however find the main results here plausible. The estimation is extremely complicated, and based on the premise that a higher global temperature does more harm to a region than a higher local temperature. And are extreme events a “productivity shock,” or a one-time resource loss that occasions some Solow catch-up? Is the basic modeling consistent with the fact that, while the number of extreme storms may be rising, the number of deaths from those same storms is falling over time? Lives lost are not the same as economic costs, but still the capacity for adjustment seems considerably underrated. What about the effects to date? The authors themselves write: “According to our counterfactual, world GDP per capita would be more than 20% higher today had no warming occurred between 1960 and 2019.” I absolutely do not believe that claim.
In any case, here is your update. To be clear, I do absolutely favor the development of alternative, less polluting energy sources.
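For readers who want to see where the counterfactual claim comes from: with roughly 1°C of realized warming between 1960 and 2019 (my assumption for the arithmetic, not the paper's exact input) and the paper's headline loss of just over 20% of GDP per degree, undoing a 20% loss mechanically leaves GDP 25% higher.

```python
# Rough arithmetic behind the quoted counterfactual. The warming
# figure and the linear damage mapping are my simplifications; the
# paper's own calculation is far more elaborate.
warming_deg_c = 1.0      # assumed realized warming, 1960-2019
damage_per_deg = 0.20    # headline long-run GDP loss per degree C

actual_gdp = 1.0                                  # normalize today's GDP
counterfactual = actual_gdp / (1 - damage_per_deg * warming_deg_c)
pct_higher = (counterfactual - actual_gdp) / actual_gdp
# a 20% loss, undone, is a 25% gain -- hence "more than 20% higher"
```

The arithmetic is internally consistent; the question Tyler raises is whether the 20%-per-degree damage estimate itself is believable.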
One measure of economics GOAT
Who is the greatest economist of all time? This paper provides one potential measure that, along with other considerations, can contribute to debates on who the greatest economist of all time is. We build a novel dataset on the percentage of history of economic thought textbooks dedicated to top economists, using 43 distinct textbooks (1st editions, when available) published between 1901 and 2023. As a percentage of total book pages, Adam Smith has the highest share at 6.69%, beating out Ricardo (5.22%), Mill (3.83%), and Marx (4.36%). Just over 32% of all textbooks allocated most of their pages to Adam Smith, followed by Marx with 18.6%, Mill with 13.95%, and Ricardo with 11.3%. While interesting as a history of economic thought project, such an exercise isn’t merely amusing pedantry; it can provide insight into the types of contributions, research questions, and methodologies that have had the most enduring impact in economics. It may also inform future authors of history of economic textbooks.
That is from a new paper by Gabriel Benzecry and Daniel J. Smith. There is of course also my generative book on this topic at econgoat.ai.
“Tough on crime” is good for young men
Using data from hundreds of closely contested partisan elections from 2010 to 2019 and a vote share regression discontinuity design, we find that narrow election of a Republican prosecutor reduces all-cause mortality rates among young men ages 20 to 29 by 6.6%. This decline is driven predominantly by reductions in firearm-related deaths, including a large reduction in firearm homicide among Black men and a smaller reduction in firearm suicides and accidents primarily among White men. Mechanism analyses indicate that increased prison-based incapacitation explains about one third of the effect among Black men and none of the effect among White men. Instead, the primary channel appears to be substantial increases in criminal conviction rates across racial groups and crime types, which then reduce firearm access through legal restrictions on gun ownership for the convicted.
That is from a new paper by Panka Bencsik and Tyler Giles. Via M.
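For readers unfamiliar with the method, a vote share regression discontinuity compares outcomes just above and just below the 50% threshold, where which party wins is as-good-as-random. A minimal local-linear sketch, with bandwidth and specification chosen by me for illustration (the paper's estimator is more sophisticated):

```python
import numpy as np

def rd_estimate(vote_share, outcome, cutoff=0.5, bandwidth=0.05):
    """Sharp RD sketch: fit a line to each side of the cutoff within
    `bandwidth` and return the estimated jump at the cutoff."""
    x = np.asarray(vote_share) - cutoff
    y = np.asarray(outcome)
    left = (x < 0) & (x >= -bandwidth)
    right = (x >= 0) & (x <= bandwidth)
    # np.polyfit returns [slope, intercept]; the intercept is the
    # fitted value at x = 0, i.e. at the cutoff itself
    b_left = np.polyfit(x[left], y[left], 1)[1]
    b_right = np.polyfit(x[right], y[right], 1)[1]
    return b_right - b_left   # discontinuity in the outcome
```

On simulated election data with a built-in jump of -0.66 at the cutoff, the estimator recovers roughly that value — the identifying assumption being that everything else varies smoothly through 50%.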
The Macroeconomic Effects of Tariffs
We study the macroeconomic effects of tariff policy using U.S. historical data from 1840–2024. We construct a narrative series of plausibly exogenous tariff changes – based on major legislative actions, multilateral negotiations, and temporary surcharges – and use it as an instrument to identify a structural tariff shock. Tariff increases are contractionary: imports fall sharply, exports decline with a lag, and output and manufacturing activity drop persistently. The shock transmits through both supply and demand channels. Prices rise in the full sample but fall post-World War II, a pattern consistent with changes in the monetary policy response and with stronger international retaliation and reciprocity in the modern trade regime.
That is from a new NBER working paper by
GPT as a Measurement Tool
We present the GABRIEL software package, which uses GPT to quantify attributes in qualitative data (e.g. how “pro innovation” a speech is). GPT is evaluated on classification and attribute rating performance against 1000+ human annotated tasks across a range of topics and data. We find that GPT as a measurement tool is accurate across domains and generally indistinguishable from human evaluators. Our evidence indicates that labeling results do not depend on the exact prompting strategy used, and that GPT is not relying on training data contamination or inferring attributes from other attributes. We showcase the possibilities of GABRIEL by quantifying novel and granular trends in Congressional remarks, social media toxicity, and county-level school curricula. We then apply GABRIEL to study the history of tech adoption, using it to assemble a novel dataset of 37,000 technologies. Our analysis documents a tenfold decline of time lags from invention to adoption over the industrial age, from ~50 years to ~5 years today. We quantify the increasing dominance of companies and the U.S. in innovation, alongside characteristics that explain whether a technology will be adopted slowly or speedily.
That is from a new NBER working paper by .
Brazil facts of the day
Pensions cost the government 10% of GDP. If no reforms are made by 2050, Brazil will spend more on pensions as a share of GDP than many richer and greyer countries… Though Brazil’s share of young people is similar to that in Chile or Mexico, its pension spending is already at Japan’s level. That is despite a modest reform in 2019 that introduced a minimum retirement age. The population is ageing rapidly. Without reform, its social-security deficit, or the shortfall between contributions and payments, is set to rise from 2% of GDP today to over 16% by 2060.
Brazil’s courts cost 1.3% of GDP—the second-most expensive in the world—mostly because of generous pensions. The typical soldier retires before turning 55 on a pension equivalent to their full salary.
Here is more from The Economist. By the way, Brazil cannot change its pension system without amending the constitution.
India AI Data MCP
The Government of India’s Ministry of Statistics and Program Implementation has created an impressive Model Context Protocol (MCP) server to connect AIs to Indian datasets. An AI connected to data via MCP essentially knows the entire codebook and can make use of the data like an expert. Once connected, one can query the data in natural language and quickly create graphs and statistical analyses. I connected Claude to the MCP server and created an elegant dashboard with data from India’s Annual Survey of Industries. Check it out.
“You see tech and AI everywhere but in the productivity statistics”
How many times have I heard versions of that claim? Erik Brynjolfsson picks up the telephone in the FT:
While initial reports suggested a year of steady labour expansion in the US, the new figures reveal that total payroll growth was revised downward by approximately 403,000 jobs. Crucially, this downward revision occurred while real GDP remained robust, including a 3.7 per cent growth rate in the fourth quarter. This decoupling — maintaining high output with significantly lower labour input — is the hallmark of productivity growth.
My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.
It is fine to suggest caution in interpreting such statistics, but they hardly push the other way.
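The "decoupling" Brynjolfsson describes is simple growth accounting: labor productivity growth is approximately output growth minus labor input growth, so robust GDP with downward-revised payrolls mechanically means higher measured productivity. The figures below are illustrative numbers in that spirit, not his actual inputs.

```python
# Growth-accounting sketch (log-difference approximation):
# productivity growth = output growth - labor input growth.

def productivity_growth(output_growth, labor_growth):
    """Approximate labor productivity growth."""
    return output_growth - labor_growth

# If output grows 3.7% while labor input grows only 1.0%, measured
# productivity growth is about 2.7% -- high output, low labor input.
g = productivity_growth(0.037, 0.010)
```

This is also why payroll revisions matter so much for the productivity statistics: revising jobs down while holding GDP fixed revises productivity up one-for-one.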