Category: Economics
Deep Research considers the costs and benefits of US AID
You can read it here; the summary sentence:
Based on the analysis above, the net assessment leans toward the conclusion that USAID’s benefits outweigh its costs on the whole, though with important qualifiers by sector and context.
Here is a useful Michael Kremer (with co-authors) paper. Here are some CRS links. Here is a Samo analysis. AID is a major contributor to the Gavi vaccine program, which is of high value. The gains from AID-supported PEPFAR are very high also.
To be clear, I consider this kind of thing to be scandalous. And I strongly suspect that some of the other outrage anecdotes are true, though they are hard to confirm one way or the other. How about funds to the BBC? While the “Elonsphere” on Twitter is very much exaggerating the horror anecdotes and the bad news, I do see classic signs of “intermediaries capture” for the agency, a common problem amongst not-for-profit institutions.
The Samo piece is excellent. For one thing he notes: “The agency primarily uses a funding model which pays by hours worked, thus incentivizing long-duration projects.” And the very smart Samantha Power, appointed by Biden to run AID, “…is in favor of disrupting the contractor ecosystem.” Samo also discusses all the restrictions that require American contractors to be involved.
Here is a study on how to reform AID, I have not yet read it.
Ken Opalo, in a very useful and excellent post, writes:
For example, in 2017 about 60% of USAID’s funds went to just 25 American organizations. Only 11% of U.S. aid goes directly to foreign organizations. The rest gets managed via U.S. entities or multilateral organizations. This doesn’t mean that the 89% of aid gets skimmed off, just that an inefficiently significant share of the 89% gets gobbled up by overhead costs. In addition, this arrangement denies beneficiaries a chance at policy autonomy.
According to the very smart, non-lunatic Charlie Robertson:
My data suggests US AID flows in 2024 were equivalent to: 93% of Somalia’s government revenues, 61% in Sudan, just over 50% in South Sudan and Yemen
While I do not take cutting off those flows lightly, that seems unsustainable and also wrong to me as a matter of USG policy. Those do not seem like viable enterprises to me.
There are various reports of AID spending billions to help overthrow Assad. I cannot easily assess this matter, either whether the outcome was good or whether AID mattered, but perhaps (assuming it was effective) such actions should be taken by a different agency or institution?
While US AID appears to pass a cost-benefit test, it does seem ripe for reform. Based on what I have read and heard, I would focus all the more on public health programs, and forget about “trade promotion,” “democracy promotion,” and more. I would get rid of virtually all of the consultants, and make direct transfers to worthy African and Ukraine programs, thus lowering overhead. If such worthy programs exist, why not give them money directly? Are they so hard to find? And if so, how trustworthy are these intermediaries really? What are they intermediating to?
So a housecleaning is needed here, but the important sources of value still should be supported.
US AID bleg
What are the best sources to read on US AID, and its costs and benefits? I am not interested in your anecdotes and adjectives, please offer serious research sources only. Thank you.
The New Consensus on the Minimum Wage
My take is that there is an evolving new consensus on the minimum wage. Namely, the effects of the minimum wage are heterogeneous and take place on more margins than employment. Read Jeffrey Clemens’s brilliant and accessible paper in the JEP for the theory. A good example of the heterogeneous impact is this new paper by Clemens, Gentry and Meer on how the minimum wage makes it more difficult for the disabled to get jobs:
…We find that large minimum wage increases significantly reduce employment and labor force participation for individuals of all working ages with severe disabilities. These declines are accompanied by a downward shift in the wage distribution and an increase in public assistance receipt. By contrast, we find no employment effects for all but young individuals with either non-severe disabilities or no disabilities. Our findings highlight important heterogeneities in minimum wage impacts, raising concerns about labor market policies’ unintended consequences for populations on the margins of the labor force.
Or Neumark and Kayla on the minimum wage and blacks:
We provide a comprehensive analysis of the effects of minimum wages on blacks, and on the relative impacts on blacks vs. whites. We study not only teenagers – the focus of much of the minimum wage-employment literature – but also other low-skill groups. We focus primarily on employment, which has been the prime concern with the minimum wage research literature. We find evidence that job loss effects from higher minimum wages are much more evident for blacks, and in contrast not very detectable for whites, and are often large enough to generate adverse effects on earnings.
Remember also that a “job” is not a simple contract of hours of work for dollars but contains many explicit and implicit margins on work conditions, fringe benefits, possibilities for promotion, training and so forth. For example, in Unintended workplace safety consequences of minimum wages, Liu, Lu, Sun and Zhang find that the minimum wage increases accidents, probably because at a higher minimum wage the pace of work increases:
we find that large increases in minimum wages have significant adverse effects on workplace safety. Our findings indicate that, on average, a large minimum wage increase results in a 4.6 percent increase in the total case rate.
Note that these effects don’t always happen, in large part because, depending on the scope of the minimum wage increase and the industry, the costs of the minimum wage may instead be passed on to prices. For example, here are Renkin and Siegenthaler finding that higher minimum wages increase grocery prices:
We use high-frequency scanner data and leverage a large number of state-level increases in minimum wages between 2001 and 2012. We find that a 10% minimum wage hike translates into a 0.36% increase in the prices of grocery products. This magnitude is consistent with a full pass-through of cost increases into consumer prices.
Similarly, Ashenfelter and Jurajda find there is no free lunch from minimum wage increases; indeed, there is approximately full pass-through at McDonald’s:
Higher labor costs induced by minimum wage hikes are likely to increase product prices. If both labor and product markets are competitive, firms can pass through up to the full increase in costs (Fullerton and Metcalf 2002). With constant returns to scale, firms adjust prices in response to minimum wage hikes in proportion to the cost share of minimum wage labor. Under full price pass-through, the real income increases of low-wage workers brought about by minimum wage hikes may be lower than expected (MaCurdy 2015). There is growing evidence of near full price pass-through of minimum wages in the United States…. Based on data spanning 2016–20, we find a 0.2 price elasticity with respect to wage increases driven (instrumented) by minimum wage hikes. Together with the 0.7 (first-stage) elasticity of wage rates with respect to minimum wages, this implies a (reduced-form) price elasticity with respect to minimum wages of about 0.14. This corresponds to near-full price pass-through of minimum-wage-induced higher costs of labor.
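In other words, the reduced-form number in that passage is just the product of the two estimated elasticities; a quick back-of-the-envelope restatement (no additional estimation, only the figures quoted above):

```latex
% Reduced-form price elasticity with respect to the minimum wage, as the
% product of the two elasticities reported by Ashenfelter and Jurajda:
\[
  \varepsilon_{p,\,mw} \;=\; \varepsilon_{p,\,w} \times \varepsilon_{w,\,mw}
  \;=\; 0.2 \times 0.7 \;=\; 0.14 ,
\]
% so a 10 percent minimum wage hike raises McDonald's prices by roughly
% 1.4 percent, which the authors interpret as near-full pass-through of the
% minimum-wage-induced labor-cost increase.
```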
You can draw your own conclusions about the desirability of the minimum wage, but the fleeting hope that it raises wages without trade-offs is gone. The effects of the minimum wage are nuanced, heterogeneous, and by no means entirely positive.
Gradual Empowerment?
The subtitle is “Systemic Existential Risks from Incremental AI Development,” and the authors are Jan Kulveit et al. Several of you have asked me for comments on this paper. Here is the abstract:
This paper examines the systemic risks posed by incremental advancements in artificial intelligence, developing the concept of ‘gradual disempowerment’, in contrast to the abrupt takeover scenarios commonly discussed in AI safety. We analyze how even incremental improvements in AI capabilities can undermine human influence over large-scale systems that society depends on, including the economy, culture, and nation-states. As AI increasingly replaces human labor and cognition in these domains, it can weaken both explicit human control mechanisms (like voting and consumer choice) and the implicit alignments with human interests that often arise from societal systems’ reliance on human participation to function. Furthermore, to the extent that these systems incentivise outcomes that do not line up with human preferences, AIs may optimize for those outcomes more aggressively. These effects may be mutually reinforcing across different domains: economic power shapes cultural narratives and political decisions, while cultural shifts alter economic and political behavior. We argue that this dynamic could lead to an effectively irreversible loss of human influence over crucial societal systems, precipitating an existential catastrophe through the permanent disempowerment of humanity. This suggests the need for both technical research and governance approaches that specifically address the risk of incremental erosion of human influence across interconnected societal systems.
This is one of the smarter arguments I have seen, but I am very far from convinced. When were humans ever in control to begin with? (Robin Hanson realized this a few years ago and is still worried about it, as I suppose he should be. There is not exactly a reliable competitive process for cultural evolution — boo hoo!)
Note the argument here is not that a few rich people will own all the AI. Rather, humans seem to lose power altogether. But aren’t people cloning DeepSeek for ridiculously small sums of money? Why won’t our AI future be fairly decentralized, with lots of checks and balances, and plenty of human ownership to boot?
Rather than focusing on “humans in general,” I say look at the marginal individual human being. That individual — forever as far as I can tell — has near-zero bargaining power against a coordinating, cartelized society aligned against him. With or without AI. Yet that hardly ever happens, extreme criminals being one exception. There simply isn’t enough collusion to extract much from the (non-criminal) potentially vulnerable lone individuals.
I do not in this paper see a real argument that a critical mass of the AIs are going to collude against humans. It seems already that “AIs in China” and “AIs in America” are unlikely to collude much with each other. Similarly, “the evil rich people” do not collude with each other all that much either, much less across borders.
I feel if the paper made a serious attempt to model the likelihood of worldwide AI collusion, the results would come out in the opposite direction. So, to my eye, “checks and balances forever” is by far the more likely equilibrium.
Does the Gender Wage Gap Actually Reflect Taste Discrimination Against Women?
One explanation of the gender wage gap is taste discrimination, as in Becker (1957). We test for taste discrimination by constructing a novel measure of misogyny using Google Trends data on searches that include derogatory terms for women. We find—surprisingly, in our view—that misogyny is an economically meaningful and statistically significant predictor of the wage gap. We also test more explicit implications of taste discrimination. The data are inconsistent with the Becker taste discrimination model, based on the tests used in Charles and Guryan (2008). But the data are consistent with the effects of taste discrimination against women in search models (Black, 1995), in which discrimination on the part of even a small group of misogynists can result in a wage gap.
That is a new NBER working paper by Molly Maloney and David Neumark.
Genetic Prediction and Adverse Selection
In 1994 I published Genetic Testing: An Economic and Contractarian Analysis, which discussed how genetic testing could undermine insurance markets. I also proposed a solution, genetic insurance, which would in essence insure people against changes in their health and life insurance premiums due to the revelation of genetic data. Later John Cochrane would independently create Time Consistent Health Insurance, a generalized form of the same idea that would allow people to have long-term health insurance without being tied to a single firm.
The Human Genome Project was completed in 2003 but, somewhat surprisingly, insurance markets didn’t break down, even though genetic information became more common. We know from twin studies that genetic heritability is very large, but it turned out that the effect of each gene variant is very small. Thus, only a few diseases can be predicted well using single-gene mutations. Since each SNP has only a small effect on disease, to predict how genes influence disease we would need data on hundreds of thousands, even millions, of people, millions of their SNPs across the genome, and their diseases. Until recently, that has been cost-prohibitive and as a result the available genetic information lacked much predictive power.
In an impressive new paper, however, Azevedo, Beauchamp and Linnér (ABL) show that data from Genome-Wide Association Studies can be used to create polygenic risk indexes (PGIs) which can predict individual disease risk from the aggregate effects of many genetic variants. The data is prodigious:
We analyze data from the UK Biobank (UKB) (Bycroft et al., 2018; Sudlow et al., 2015). The UKB contains genotypic and rich health-related data for over 500,000 individuals from across the United Kingdom who were between 40 and 69 years old at recruitment (between 2006 and 2010). UKB data is linked to the UK’s National Health Service (NHS), which maintains detailed records of health events across the lifespan and with which 98% of the UK population is registered (Sudlow et al., 2015). In addition, all UKB participants took part in a baseline assessment, in which they provided rich environmental, family history, health, lifestyle, physical, and sociodemographic data, as well as blood, saliva, and urine samples.
The UKB contains genome-wide array data for ∼800,000 genetic variants for ∼488,000 participants.
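For readers unfamiliar with the construction, a polygenic index is typically just a weighted sum of an individual’s genotypes, with weights taken from GWAS estimates. A standard textbook formulation (not necessarily the paper’s exact specification):

```latex
% Polygenic index for individual i over J measured SNPs:
% x_{ij} counts risk alleles (0, 1, or 2) and beta-hat_j is the
% GWAS-estimated effect size for SNP j.
\[
  \mathrm{PGI}_i \;=\; \sum_{j=1}^{J} \hat{\beta}_j \, x_{ij}
\]
% Larger GWAS samples give more precise beta-hats, which is why the
% predictive power of PGIs keeps improving as sample sizes grow.
```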
So for each of these individuals ABL construct risk indexes, and they ask how significant this new information is for buying insurance in the critical illness insurance market:
Critical illness insurance (CII) pays out a lump sum in the event that the insured person gets diagnosed with any of the medical conditions listed on the policy (Brackenridge et al., 2006). The lump sum can be used as the policyholder wishes. The policy pays out once and is thereafter terminated.
… Major CII markets include Canada, the United Kingdom, Japan, Australia, India, China, and Germany. It is estimated that 20% of British workers were covered by a CII policy in 2009 (Gatzert and Maegebier, 2015). The global CII market has been valued at over $100 billion in 2021 and was projected to grow to over $350 billion by 2031 (Allied Market Research, 2022).
The answer, as you might have guessed by now, is very significant. Even though current PGIs explain only a fraction of total genetic risk, they are already predictive enough that it would make sense for individuals with high measured risk to purchase insurance, while those with low measured risk would opt out—leading to adverse selection that threatens the financial sustainability of the insurance market.
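To see why private risk information unravels a uniformly priced pool, here is a minimal simulation sketch. This is not ABL’s model: the risk distribution, payout, and willingness-to-pay markup are made-up parameters chosen purely for illustration.

```python
# Minimal adverse-selection sketch (not ABL's model): buyers observe a private
# risk signal (a stand-in for a PGI), but the insurer must charge one premium.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_risk = rng.beta(2, 38, n)   # hypothetical probability of a covered illness
payout = 100_000                 # lump-sum benefit
markup = 1.05                    # buyers will pay up to 5% above actuarial value

def equilibrium(signal_weight):
    """Iterate the premium until the insured pool stops changing.

    signal_weight = 0: buyers know only the population average risk.
    signal_weight = 1: buyers know their own risk (a fully predictive PGI).
    """
    perceived = signal_weight * true_risk + (1 - signal_weight) * true_risk.mean()
    premium = true_risk.mean() * payout          # start at full-pool actuarial price
    for _ in range(500):
        buyers = perceived * payout * markup >= premium
        if not buyers.any():
            return 0.0, premium                  # market fully unravels
        new_premium = true_risk[buyers].mean() * payout
        if abs(new_premium - premium) < 1e-6:
            break
        premium = new_premium
    return buyers.mean(), premium

for w in (0.0, 0.5, 1.0):
    share, prem = equilibrium(w)
    print(f"signal weight {w:.1f}: {share:.1%} insured at premium ${prem:,.0f}")
```

With no usable signal everyone stays in the pool at the actuarial price; once buyers can act on a predictive signal, low-risk individuals drop out, the premium ratchets up, and coverage collapses toward, or entirely to, the highest-risk buyers. That is the dynamic ABL quantify with real PGIs.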
Today, the 500,000 people in the UK’s Biobank don’t know their PGIs but in principle they could and in the future they will. Indeed, as GWAS sample sizes increase, PGI betas will become more accurate and they will be applied to a greater fraction of an individual’s genome so individual PGIs will become increasingly predictive, exacerbating selection problems in insurance markets.
If my paper was a distant early warning, Azevedo, Beauchamp, and Linnér provide an early—and urgent—warning. Without reform, insurance markets risk unraveling. The authors explore potential solutions, including genetic insurance, community rating, subsidies, and risk adjustment. However, the effectiveness of these measures remains uncertain, and knee-jerk policies, such as banning insurers from using genetic information, could lead to the collapse of insurance altogether.
Sundry observations on the Trump tariffs
Brad Setser estimates the costs at 0.8 percent of U.S. GDP. I am not sure if he is considering exchange rate adjustments in that figure.
Kevin Bryan writes:
The problem with escalating, again, is that Canada is more reliant on US energy than vice versa, US ports than vice versa, US intermediate goods than vice versa, and DT is basically a narcissist. Again: no normal negotiation here, as the tariffs itself have no logical basis! 4/x
The fentanyl excuse seems like a flimsy (and should be illegal) one to let the exec branch set a tariff rate that constitutionally is Congress’ job. But maybe there is some “give Trump a fake win and de-escalate”. I worry about what that does in the future, though. 5/x
Ben Golub notes:
Modern supply chains don’t look like trade theory 101! They involve constant border crossings, each now hit by tariffs. Tariffs raise prices, but the more important thing they do is disrupt supply relationships.
So when a shock hits, you don’t just have a bit less activity by a few of the least profitable firms. You suddenly knock out some of the relationships (contracts) and some of the nodes (companies) in a large and very complex network. This can be pretty disruptive!
Here is Noah’s post. Here is the Yale Budget Lab on likely price effects in America.
Here is an Alan Beattie FT piece on how tariffs often matter less than you think. The size of the costs here can be disputed, but the most relevant fact is that there simply isn’t any upside to the Trump tariff policy. If you think it is about fentanyl, I have a prediction: the price of fentanyl will not be rising anytime soon, measured across the window of a one-year moving average. Here are some additional relevant points about fentanyl, which, coming from Canada, is not a major problem.
Letting China into the WTO was not the key decision
We study China’s export growth to the United States from 1950–2008, using a structural model to disentangle the effects of past tariff changes from the effects of changes in expectations of future tariffs. We find that the effects of China’s 1980 Normal Trade Relations (NTR) grant lasted past its 2001 accession to the World Trade Organization (WTO), and the likelihood of losing NTR status decreased significantly during 1986–92 but changed little thereafter. US manufacturing employment trends support our findings: industries more exposed to the 1980 reform have shed workers steadily since then without acceleration around China’s WTO accession.
That is from a new and forthcoming JPE article by George Alessandria, Shafaat Yar Khan, Armen Khederlarian, Kim J. Ruhl, and Joseph B. Steinberg.
U.S. Infrastructure: 1929-2023
By Ray C. Fair, an important contribution:
This paper examines the history of U.S. infrastructure since 1929 and in the process reports an interesting fact about the U.S. economy. Infrastructure stock as a percent of GDP began a steady decline around 1970, and the government budget deficit became positive and large at roughly the same time. The infrastructure pattern in other countries does not mirror that in the United States, so the United States appears to be a special case. The overall results suggest that the United States became less future oriented beginning around 1970, an increase in the social discount rate. This change has persisted. This is the interesting fact. The paper contains speculation on possible causes.
Here is the link. Via the excellent Kevin Lewis.
The new tariffs are bad
Or are they tariff threats instead? Still bad! From the FT:
Donald Trump has said he will hit the EU with tariffs, adding the bloc to a list of targets including Canada and Mexico and bringing the US to the brink of new trade wars with its biggest trading partners.
The US president acknowledged that the new tariffs could cause some market “disruption”, but claimed they would help the country close its trade deficits.
“The tariffs are going to make us very rich, and very strong,” Trump told reporters in the Oval Office.
Hours before his plan for tariffs of 25 per cent on Canada and Mexico was due to take effect on February 1, Trump also widened his threat to include the EU, which he said had treated the US “very badly”.
There is not any good argument for doing this. The simplest hypothesis here is that Trump has mistaken views on trade economics, and is raising tariffs for the same reason that I, if I were President, would be trying to cut them.
“It’s not a negotiating tool,” Trump said. “It’s pure economic. We have big deficits with, as you know, with all three of them.”
Of course this is a sign that further bad things will happen. Let us hope that the courts can strike these down…
FDA Deregulation of E-Cigarettes Saved Lives and Spurred Innovation
What would happen to drug development if the FDA lost its authority to prohibit new drugs? Would research and development boom and lives be saved? Or would R&D decline and lives be lost to a flood of unsafe and ineffective drugs? Or perhaps R&D would decline as demand for new drugs faltered due to public hesitation in the absence of FDA approval? In an excellent new paper Pesko and Saenz examine one natural experiment: e-cigarettes.
The FDA banned e-cigarettes as unapproved drugs soon after their introduction in the United States. The FDA had previously banned other nicotine-infused products. Thus, it was surprising when, in 2010, a court prohibited the FDA from regulating e-cigarettes as a drug/device, ruling that Congress had intended for e-cigarettes to be regulated as a tobacco product, not as a drug.
As of 2010, therefore, e-cigarettes were not FDA regulated:
…e–cigarette companies were able to bypass the lengthy and costly drug approval process entirely. Additionally, without FDA drug regulation, e–cigarette companies could also freely enter the market, modify products without approval, and bypass extensive post–market reporting requirements and quality control standards.
Indeed, it wasn’t until 2016 that the FDA formally “deemed” e-cigarettes as tobacco products (deemed since they don’t actually contain tobacco) and approvals under the less stringent tobacco regulations were not required until 2020. For nearly a decade, therefore, e-cigarettes were almost entirely unregulated and then lightly regulated under the tobacco framework. So, what happened during this period?
Pesko and Saenz show that FDA deregulation led to a boom in e-cigarette research and development which improved e-cigarettes and led to many lives saved as people switched from smoking to vaping.
The boom in research and development is evidenced by a very large increase in US e-cigarette patents. We do not see a similar increase in Australia (where e-cigarettes were not deregulated) nor do we see an increase in non e-cigarette smoking cessation products (figure 1a of their paper not shown here).
Estimating the decline in smoking and smoking-attributable mortality (SAM) is more difficult, but the authors assemble a large collection of data broken down by demographics, and they estimate that prohibiting the FDA from regulating e-cigarettes reduced smoking-attributable mortality by nearly 10% on average each year from 2011 to 2019, for a total savings of some 677,000 life-years.
The authors pointedly compare what happened under deregulation of e-cigarettes–innovation and lives saved–with what happened to similar smoking cessation products that remained under FDA regulation–stagnation and no reduction in smoking attributable mortality.
A key takeaway on the slowness of FDA drug regulation is that it took 9 years before nicotine gum could be sold with a higher nicotine strength, 12 years before it could be sold OTC, and 15 years before it could be sold with a flavor. Further, a recent editorial laments that there has been largely non–existent innovation in FDA–approved smoking cessation drugs since 2006 (Benowitz et al., 2023). In particular, the “world’s oldest smoking cessation aid” cytisine, first brought to market in 1964 in Bulgaria (Prochaska et al., 2013), and with quit success rates exceeding single forms of nicotine replacement therapy (NRT) (Lindson et al., 2023), is not approved as a drug in the United States.
The authors conclude, “this situation raises concern that drugs may be over–regulated in the United States…”. Quite so.
Addendum: A quick review of the FDA literature. In addition to classic works by Peltzman on the 1962 Amendments and by myself on what we can learn about the FDA from off-label pricing, we have a spate of recent new papers, including one by Parker Rogers, which I covered earlier:
In an important and impressive new paper, Parker Rogers looks at what happens when the FDA deregulates or “down-classifies” a medical device type from a more stringent to a less stringent category. He finds that deregulated device types show increases in entry, innovation, as measured by patents and patent quality, and decreases in prices. Safety is either negligibly affected or, in the case of products that come under potential litigation, increased.
and a paper by Isakov, Lo and Montazerhodjat, which finds that FDA statistical standards tend to be too conservative, especially for drugs meant to treat deadly diseases (see my comments on their paper and more links in Is the FDA Too Conservative or Too Aggressive?)
See also FDA commentary, for much more, from sunscreens to lab-developed tests.
Keynes on the Soviet Union
I had not known of this passage, which I am packaging with its introduction from Gavan Tredoux:
John Maynard Keynes has the undeserved reputation of a critic of the USSR. Few know that he reviewed Sidney and Beatrice Webb’s mendacious tome The Soviet Union: a New Civilization (1935/1937/1943) fawningly. Perhaps the most embarrassing thing Keynes ever wrote. From his Complete Works 28:
“One book there is … which every serious citizen will do well to look into—the extensive description of Soviet Communism by Mr and Mrs Sidney Webb. It is on much too large a scale to be called a popular book, but the reader should have no difficulty in comprehending the picture it conveys. Until recently events in Russia were moving too fast and the gap between paper professions and actual achievements was too wide for a proper account to be possible. But the new system is now sufficiently crystallised to be reviewed. The result is impressive. The Russian innovators have passed, not only from the revolutionary stage, but also from the doctrinaire stage. There is little or nothing left which bears any special relation to Marx and Marxism as distinguished from other systems of socialism. They are engaged in the vast administrative task of making a completely new set of social and economic institutions work smoothly and successfully over a territory so extensive that it covers one sixth of the land surface of the world. Methods are still changing rapidly in response to experience. The largest scale empiricism and experimentalism which has ever been attempted by disinterested administrators is in operation. Meanwhile the Webbs have enabled us to see the direction in which things appear to be moving and how far they have got. It is an enthralling work, because it contains a mass of extraordinarily important and interesting information concerning the evolution of the contemporary world. It leaves me with a strong desire and hope that we in this country may discover how to combine an unlimited readiness to experiment with changes in political and economic methods and institutions, whilst preserving traditionalism and a sort of careful conservatism, thrifty of everything which has human experience behind it, in every branch of feeling and of action.”
So no, sorry, Keynes cannot be GOAT.
It’s Time to Build the Peptidome!
Antimicrobial resistance is a growing problem. Peptides, short sequences of amino acids, are nature’s first defense against bacteria. Research on antimicrobial peptides is promising but such research could be much more productive if combined with machine learning on big data. But collecting, collating and organizing big data is a public good and underprovided. Current peptide databases are small, inconsistent, incompatible with one another and they are biased against negative controls. Thus, there is scope for a million-peptide database modelled on something like the Human Genome Project or ProteinDB:
ML needs data. Google’s AlphaGo trained on 30 million moves from human games and orders of magnitude more from games it played against itself. The largest language models are trained on at least 60 terabytes of text. AlphaFold was trained on just over 100,000 3D protein structures from the Protein Data Bank.
The data available for antimicrobial peptides is nowhere near these benchmarks. Some databases contain a few thousand peptides each, but they are scattered, unstandardized, incomplete, and often duplicative. Data on a few thousand peptide sequences and a scattershot view of their biological properties are simply not sufficient to get accurate ML predictions for a system as complex as protein-chemical reactions. For example, the APD3 database is small, with just under 4,000 sequences, but it is among the most tightly curated and detailed. However, most of the sequences available are from frogs or amphibians due to path-dependent discovery of peptides in that taxon. Another database, CAMPR4, has on the order of 20,000 sequences, but around half are “predicted” or synthetic peptides that may not have experimental validation, and contain less info about source and activity. The formatting of each of these sources is different, so it’s not easy to put all the sequences into one model. More inconsistencies and idiosyncrasies stack up for the dozens of other datasets available.
There is even less negative training data; that is, data on all the amino-acid sequences without interesting publishable properties. In current ML research, labs will test dozens or even hundreds of peptide sequences for activity against certain pathogens, but they usually only publish and upload the sequences that worked.
…The data problem facing peptide research is solvable with targeted investments in data infrastructure. We can make a million-peptide database.
There are no significant scientific barriers to generating a 1,000x or 10,000x larger peptide dataset. Several high-throughput testing methods have been successfully demonstrated, with some screening as many as 800,000 peptide sequences and nearly doubling the number of unique antimicrobial peptides reported in publicly available databases. These methods will need to be scaled up, not only by testing more peptides, but also by testing them against different bacteria, checking for human toxicity, and testing other chemical properties, but scaling is an infrastructure problem, not a scientific one.
This strategy of targeted data infrastructure investments has three successful precedents: PubChem, the Human Genome Project, and ProteinDB.
Much more in this excellent piece of science and economics from IFP and Max Tabarrok.
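As a purely hypothetical illustration of the standardization problem the piece describes, the sketch below normalizes two made-up source formats into one minimal shared record, with an explicit field for the negative results that current databases tend to omit. All field names, schemas, thresholds, and example rows are invented for illustration; they do not come from APD3, CAMPR4, or the IFP piece.

```python
# Hypothetical sketch of unifying differently formatted peptide records.
# All field names, schemas, and example rows are invented for illustration.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PeptideRecord:
    sequence: str              # amino-acid sequence, one-letter codes
    target: str                # organism the peptide was tested against
    active: Optional[bool]     # None = activity never reported (missing negative data)
    source_db: str

def from_source_a(row: dict) -> PeptideRecord:
    # Source A (hypothetical): only uploads peptides that showed activity.
    return PeptideRecord(
        sequence=row["Sequence"].upper(),
        target=row["Target Organism"],
        active=True,
        source_db="A",
    )

def from_source_b(row: dict) -> PeptideRecord:
    # Source B (hypothetical): different column layout, mixes measured and
    # predicted entries; an arbitrary MIC cutoff stands in for "active".
    measured = not row.get("predicted", False)
    return PeptideRecord(
        sequence=row["seq"].upper(),
        target=row.get("organism", "unknown"),
        active=(row["mic_ug_ml"] < 64) if measured else None,
        source_db="B",
    )

unified = [
    from_source_a({"Sequence": "kwklfkkigk", "Target Organism": "E. coli"}),
    from_source_b({"seq": "GLFDIVKKV", "organism": "S. aureus",
                   "mic_ug_ml": 128, "predicted": False}),
]
for rec in unified:
    print(rec)
```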
Is it a problem if Wall Street buys up homes?
No, as I argue in my latest Bloomberg column. This one is basic economics:
The simpler point is this: If large financial firms can buy your home, you are better off. You will have more money to retire on, and presumably selling your home will be easier and quicker, removing what for many homeowners is a major source of stress.
And all of this makes it easier to buy a home in the first place, knowing you will have a straightforward set of exit options. You don’t have to worry about whether your buyer can get a mortgage. Homeowners tend to be forward-looking, and a home’s value as an investment is typically a major consideration in a purchase decision.
And:
When financial firms buy homes, they also tend to renovate and invest in fixing the places up.
A less obvious point is that lower-income groups can benefit when financial firms buy up homes. Obviously, if a hedge fund buys your home, no one at the fund is intending to live there; they probably plan to rent it out. The evidence shows that when institutional investors purchase housing, it leads to more rental inventory and lower rents.
If the tradeoff is higher prices to buy a home but lower prices to rent one, that will tend to favor lower-income groups. Think of it as a form of housing aid that does not cost the federal government anything. Economist Raj Chetty, in a series of now-famous papers with co-authors, has stressed the ability to move into a better neighborhood as a fundamental determinant of upward economic mobility. Lower rents can enable those improvements.
The article also shows that the extent of financial firms buying homes is smaller than many people seem to believe.
Will transformative AI raise interest rates?
We want to know if AGI is coming. Chow, Halperin, and Mazlish have a paper called “Transformative AI, Existential Risk, and Real Interest Rates” arguing that, if we believe the markets, it is not coming for some time. The reasoning is simple. If we expect to consume much more in the future, and people smooth their consumption over time, then people will want to borrow more now. The real interest rate would rise. The reasoning also works if AI is unaligned, and has a chance of destroying all of us. People would want to spend what they have now. They would be disinclined to save, and real interest rates would have to rise in order to induce people to lend.
The trouble is that “economic growth” is not really one thing. It consists both of expanding the quantity of units consumed for a given amount of resources and of expanding what we are capable of consuming at all. Take the television – it has simultaneously become cheaper and greatly improved in quality. One can easily imagine a world in which the goods stay the same price, but greatly improve in quality. Thus, the marginal utility gained from one dollar increases in the future, and we would want to save more, not less. The coming of AGI could be heralded by falling interest rates and high levels of saving.
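To make the mechanism explicit, here is the textbook Euler-equation logic, a sketch of the standard argument rather than the exact model in Chow, Halperin, and Mazlish:

```latex
% Standard consumption Euler equation with CRRA utility:
% beta = discount factor, rho = rate of time preference (beta = e^{-rho}),
% sigma = curvature, g = expected growth of consumption.
\[
  1 + r \;=\; \frac{1}{\beta}\,\frac{u'(c_t)}{u'(c_{t+1})},
  \qquad u'(c) = c^{-\sigma}
  \quad\Longrightarrow\quad r \;\approx\; \rho + \sigma g .
\]
% The paper's direction: transformative AI raises expected growth g, so r rises.
% The post's caveat: if progress shows up as higher quality at unchanged prices,
% what matters is the marginal utility of a dollar spent in each period,
% lambda_t = u'(c_t)/p_t. Writing the same condition in those terms,
\[
  1 + r \;=\; \frac{1}{\beta}\,\frac{\lambda_t}{\lambda_{t+1}} ,
\]
% a rise in lambda_{t+1} (each future dollar buys more utility) pushes toward
% more saving today and a lower, not higher, equilibrium real rate.
```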