The rant against Amazon and the rant for Amazon

Wow! It’s unbelievable how hard you are working to deny that monopsony and monopoly type market concentration is causing all these issues. Do you think it’s easy to compete with Amazon? Think about all the industries Amazon just thought about entering and what that did to the share price of incumbents. Do you think Amazon doesn’t use its market clout and brand name to pay people less? Don’t they use the same to extract incentives from politicians? Corporate profits are at record highs as a percent of the economy, how is that maintained? What is your motivation for closing your eyes and denying consolidation? It doesn’t seem that you are being logical.

That is from Steven Wolf, in the comments.  You might levy some justified complaints against Amazon, but this passage packs a remarkable number of fallacies into a very small space.

First, monopsony and monopoly tend to have contrasting or opposite effects.  To the extent Amazon is a monopsony, that leads to higher output and lower prices.

Second, if Amazon is knocking out incumbents that may very well be good for consumers.  Consumers want to see companies that are hard for others to compete with.  Otherwise, they are just getting more of the same.

Third, if you consider markets product line by product line, there are very few sectors where Amazon would appear to have much market power, or a very large share of the overall market for that good or service.

Fourth, Amazon is relatively strong in the book market.  Yet if a book is $28 in a regular store, you probably can buy it for $17 on Amazon, or for even less used, through Amazon.

Fifth, Amazon takes market share from many incumbents (nationwide), but it does not in general “knock out” the labor market infrastructure in most regions.  That means Amazon must hire labor by paying more or otherwise offering better working conditions, however much you might wish to complain about them.

Sixth, if you adjust for the nature of intangible capital, and the difference between economic and accounting profit, it is not clear corporate profits have been so remarkably high as of late.

Seventh, if Amazon “extracts” lower taxes and an improved Metro system from the DC area, in return for coming here, that is a net Pareto improvement or in any case at least not obviously objectionable.

Eighth, I did not see the word “ecosystem” in that comment, but Amazon has done a good deal to improve logistics and also cloud computing, to the benefit of many other producers and ultimately consumers.  Book authors will just have to live with the new world Amazon has created for them.

And then there is Rana Foroohar:

“If Amazon can see your bank data and assets, [what is to stop them from] selling you a loan at the maximum price they know you are able to pay?” Professor Omarova asks.

How about the fact that you are able to borrow the money somewhere else?

Addendum: A more interesting criticism of Amazon, which you hardly ever hear, is the notion that they are sufficiently dominant in cloud computing that a collapse/sabotage of their presence in that market could be a national security issue.  Still, it is not clear what other arrangement could be safer.

Monday assorted links

1. How not to scout for soccer talent.

2. NIMBY isn’t just CA: “Philly’s 43,000 vacant lots face a fresh political battle.”

3. The new economics of Chinatowns: “Chinese restaurant jobs tend not to be in places with a high concentration of Chinese immigrants, but rather in places with a high proportion of non-Hispanic whites. In addition, the farther the jobs are from New York City, the higher the salary.”

4. MIE: Board game about alpacas.

5. IQ predicts how you vote in Denmark (multi-party system) but not America.  But note this: “In both countries, higher ability predicts left-wing social and right-wing economic views.”

6. Why Sacha Baron Cohen won’t be funny any more.

The Ex-Post Dead Are Not Ex-Ante Hopeless

It’s well known that a large fraction of medical spending occurs in the last 12 months of life, but does this mean that the money spent was fruitless? Be careful: there is a big selection effect–we don’t see the people we spent money on who didn’t die. A new paper in Science by Einav, Finkelstein, Mullainathan and Obermeyer finds that most spending is not on people who are predicted to die within the next 12 months.

That one-quarter of Medicare spending in the United States occurs in the last year of life is commonly interpreted as waste. But this interpretation presumes knowledge of who will die and when. Here we analyze how spending is distributed by predicted mortality, based on a machine-learning model of annual mortality risk built using Medicare claims. Death is highly unpredictable. Less than 5% of spending is accounted for by individuals with predicted mortality above 50%. The simple fact that we spend more on the sick—both on those who recover and those who die—accounts for 30 to 50% of the concentration of spending on the dead. Our results suggest that spending on the ex post dead does not necessarily mean that we spend on the ex ante “hopeless.”

…“Even if we zoom in further on the subsample of individuals who enter the hospital with metastatic cancer…we find that only 12% of decedents have an annual predicted mortality of more than 80%.”

Thus, we aren’t spending on people for whom there is no hope but it doesn’t follow that it’s the spending that creates the hope. What we really want to know is who will live or die conditional on the spending. And to that issue this paper does not speak.
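The selection effect is easy to see in a stylized simulation. The sketch below is mine, not the paper’s model, and every parameter in it is invented: spending rises with sickness, death follows predicted risk, and spending is then tallied for the ex post dead versus the ex ante “hopeless” (predicted mortality above 50 percent).

```python
# A stylized simulation of the selection effect, with invented parameters;
# this illustrates the logic, it is not the paper's model.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Predicted annual mortality risk, heavily skewed toward low values.
risk = rng.beta(0.5, 10.0, n)

# Spending rises with sickness (proxied here by risk), for survivors
# and decedents alike.
spending = 1_000 + 200_000 * risk + rng.exponential(2_000, n)

# Realized deaths follow the predicted risk.
died = rng.random(n) < risk

share_on_dead = spending[died].sum() / spending.sum()
share_on_hopeless = spending[risk > 0.5].sum() / spending.sum()

print(f"Spending share on the ex post dead:      {share_on_dead:.1%}")
print(f"Spending share on predicted risk > 50%:  {share_on_hopeless:.1%}")
```

Even though almost no spending in this toy world goes to people whose predicted mortality exceeds 50 percent, the dead still absorb a disproportionate share of spending after the fact, which is the paper’s point.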

Anthony Downs on race and urbanism, that was then this is now

It always surprises me that the name of Anthony Downs is not mentioned more often in conjunction with the Nobel Prize in economics.  His An Economic Theory of Democracy is one of the best and most important books on public choice economics, and it is the major source for the median voter theorem. Yet now a new paperback copy of the book is not to be had for less than $100.  Downs also made major contributions to transportation economics (traffic expands to fill capacity), housing and urban economics, and the theory of bureaucracy.

Yesterday I learned that Downs was a major White House consultant on race and urban affairs in 1967, working with James Tobin and Kermit Gordon and other luminaries on the National Commission on Urban Problems.  What they produced fed into what was described as “The Most Courageous Government Report in the Last Decade,” namely the Kerner Commission report.  Here are some details:

1. Downs did much of the work of the commission and much of the actual writing, including of the Kerner Report, including the section on housing policy and the ghetto.

2. He was very concerned with “white flight” and thought a more radical approach to urban poverty was needed.  He thought Great Society programs had not been tried on a large enough scale.

3. In the view of Downs, major progress already had been made, but he worried that aspirations were rising faster than living standards.

4. He spelled out a “status quo approach,” a “ghetto-improvement strategy,” and a “dispersal strategy” based on integration.  He considered the latter the most ambitious and perhaps the most unlikely.  He focused on outlining these alternatives, and their benefits and costs, rather than recommending any one of them.

5. Among the specific proposals considered were a Neighborhood Youth Corps, increasing the minimum wage, job training, public service programs, and a federally enforced fair employment-practices bill.  The draft also encouraged policymakers to think about educational vouchers, decentralizing urban school systems, and educational innovation.  There were arguments as to whether teachers’ unions should be held at fault and weakened.

It is striking how little these debates have progressed in more than fifty years.

p.s. Many on the right were critical of the report.

This is all from Steven M. Gillon, Separate and Unequal: The Kerner Commission and the Unraveling of American Liberalism.

Who Benefits from Targeted Property Tax Relief?

If you want to lower the price of housing and still house lots of people, there is really only one way: build more housing. Yet politicians and voters continually seek to repeal the laws of supply and demand. A case in point: many states reduce property tax rates for seniors, veterans, or the disabled, or combinations thereof. Great for seniors, veterans, and the disabled, right? Wrong. If supply doesn’t increase, lowering property taxes simply increases the price of housing.

If the property tax relief is targeted to a very small group, then demand won’t increase much and the benefits will accrue to the targeted group. But seniors and veterans are each a significant fraction of the population and an even more significant fraction of homeowners. Thus, we might expect that a significant fraction of the tax relief will be capitalized into housing prices–that’s exactly what Moulton, Waller and Wentland find in a new paper:

While property tax relief measures are often intended to aid specific groups, basic supply and demand analysis predicts that an unintended consequence of this particular kind of tax relief is that, on the margin, it increases demand for homeownership among its expected beneficiaries. Accordingly, we examine two property tax relief measures in Virginia that applied to disabled veterans and the elderly, finding that these policy changes had an immediate effect on home prices after the voters approved them on Election Day. Overall, we find that home prices rose by approximately 5 percent in response to the increase in demand for homeownership. Indeed, the tax relief policies provide a unique, quasi-experimental methodological setting where the treatment is exogenously assigned to specific groups within this market. We find that the effect was as much as an 8.1 percent price appreciation for homes in areas with high concentrations of veterans, 7.3 percent in areas with more seniors, and 7.4 percent for senior preferred homes in all areas. The effect was highest, 9.3 percent, in areas with high concentrations of seniors and veterans, which translates to about $18,900, or roughly full capitalization, for the average home. Conversely, the tax relief measures had little if any effect on homes in areas with fewer potential beneficiaries….
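As a rough plausibility check on “full capitalization”: the price increase should approximate the present value of the stream of expected tax savings. A minimal sketch, with both inputs invented for illustration rather than taken from the paper:

```python
# Back-of-the-envelope capitalization: the price bump should be close to
# the present value of the (roughly perpetual) stream of tax savings.
# Both numbers below are hypothetical, chosen only for illustration.
annual_tax_saving = 950.0   # hypothetical property tax relief, dollars per year
discount_rate = 0.05        # hypothetical real discount rate

price_increase = annual_tax_saving / discount_rate  # PV of a perpetuity
print(f"Implied capitalization: ${price_increase:,.0f}")  # $19,000
```

On inputs of that magnitude the implied price bump lands in the neighborhood of the paper’s $18,900 figure, which is what “roughly full capitalization” means.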

A cynic might argue that the true intent of the policy is to raise housing prices, but this gives politicians and voters too much credit. The intent is sincere; it’s the means that are mistaken.

The distribution of cities, then and now

In today’s developed countries, cities are thus scattered across historically important agricultural areas; as a result, there is a relatively higher degree of spatial equality in the distribution of resources within these countries.  By contrast, in today’s developing countries, cities are concentrated more on the coast where transport conditions, compared to agricultural suitability, are more favorable.

That is from Henderson, Squires, Storeygard, and Weil in the January 2018 QJE, based on light data measured by satellites.  Overall, I view this regularity as a negative for the prospects for liberalism and democracy in emerging economies, as urban concentration can encourage too much rent-seeking and kleptocracy.  It also reflects the truly amazing wisdom of (some of) our Founding Fathers, who saw a connection between liberty and decentralized agrarianism.  It suggests a certain degree of pessimism about China’s One Belt, One Road initiative.  The development of the hinterland in the United States may not be a pattern that today’s emerging economies necessarily should or could be seeking to replicate.  Which makes urban economics and Henry George all the more important.

Is legal marijuana more potent?

Yes, here is Keith Humphreys from Wonkblog:

Although some people believe prohibiting drugs is what makes their potency increase, the potency of marijuana under legalization has disproved that idea. Potency rises in both legal and illegal markets for the simple reason that it conveys advantages to sellers. More potent drugs have more potential to addict customers, thereby turning them into reliable profit centers.

In other legal drug markets, regulators constrain potency. Legal alcohol beverage concentrations are regulated in a variety of ways, including through different levels of tax for products of different strengths as well as constraints on labeling and place of sale. In most states, for a beverage to be marketed and sold as “beer,” its alcohol content must fall within a specified range. Similarly, if wine is distilled to the point that its alcohol content rises too high, some states require it be sold as spirits (i.e., as “brandy”) and limit its sale locations.

As states have legalized marijuana, they have put no comparable potency restrictions in place, for example capping THC content or levying higher taxes on more potent marijuana strains. Sellers are doing the economically rational thing in response: ramping up potency.

How about the Netherlands?

The study was conducted in the Netherlands, where marijuana is legally available through “coffee shops.” The researchers examined the level of delta-9-tetrahydrocannabinol (THC), the main intoxicant in marijuana, over a 16-year period. Marijuana potency more than doubled from 8.6 percent in 2000 to 20.3 percent in 2004, which was followed by a surge in the number of people seeking treatment for marijuana-related problems. When potency declined to 15.3 percent THC, marijuana treatment admissions fell thereafter. The researchers estimated that for every 3 percent increase in THC, roughly one more person per 100,000 in the population would seek marijuana use disorder treatment for the first time.

The Dutch findings are relevant to the United States because high THC marijuana products have proliferated in the wake of legalization. The average potency of legal marijuana products sold in the state of Washington, for example, is 20 percent THC, with some products being significantly higher.
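To see what the quoted dose-response estimate implies, here is the arithmetic applied to the Dutch potency swing, assuming (as the phrasing suggests) a linear relationship in percentage points of THC:

```python
# Applying the quoted rule of thumb: roughly one additional first-time
# treatment seeker per 100,000 people for every 3-percentage-point rise in THC.
thc_2000, thc_2004 = 8.6, 20.3                    # percent THC, from the study
extra_per_100k = (thc_2004 - thc_2000) / 3.0
print(f"Implied additional first-time treatment seekers: "
      f"{extra_per_100k:.1f} per 100,000")        # about 3.9 per 100,000
```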

I believe that marijuana legalization has moved rather rapidly into being an overrated idea.  To be clear, it is still an idea I favor.  It seems to me wrong and immoral to put people in jail for ingesting substances into their body, or for aiding others in doing so, at least provided fraud is absent in the transaction.  That said, IQ is so often what is truly scarce in society.  And voluntary consumption decisions that lower IQ are not something we should be regarding with equanimity.  Ideally I would like to see government discourage marijuana consumption by using the non-coercive tools at its disposal, for instance by making it harder for marijuana to have a prominent presence in the public sphere, or by discouraging more potent forms of the drug.  How about higher taxes and less public availability for more potent forms of pot, just as in many states beer and stronger forms of alcohol are not always treated equally under the law?

Why has investment been weak?

Germán Gutiérrez and Thomas Philippon have a new paper on this topic:

We analyze private fixed investment in the U.S. over the past 30 years. We show that investment is weak relative to measures of profitability and valuation — particularly Tobin’s Q, and that this weakness starts in the early 2000’s. There are two broad categories of explanations: theories that predict low investment along with low Q, and theories that predict low investment despite high Q. We argue that the data does not support the first category, and we focus on the second one. We use industry-level and firm-level data to test whether under-investment relative to Q is driven by (i) financial frictions, (ii) changes in the nature and/or localization of investment (due to the rise of intangibles, globalization, etc), (iii) decreased competition (due to technology, regulation or common ownership), or (iv) tightened governance and/or increased short-termism. We do not find support for theories based on risk premia, financial constraints, safe asset scarcity, or regulation. We find some support for globalization; and strong support for the intangibles, competition and short-termism/governance hypotheses. We estimate that the rise of intangibles explains 25-35% of the drop in investment; while Concentration and Governance explain the rest. Industries with more concentration and more common ownership invest less, even after controlling for current market conditions and intangibles. Within each industry-year, the investment gap is driven by firms owned by quasi-indexers and located in industries with more concentration and more common ownership. These firms return a disproportionate amount of free cash flows to shareholders. Lastly, we show that standard growth-accounting decompositions may not be able to identify the rise in markups.
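For readers who want to see the shape of the exercise, here is a minimal sketch, on simulated data with invented coefficients, of the kind of industry-level regression the abstract describes: investment rates regressed on Tobin’s Q plus concentration and common-ownership measures. It is not the authors’ code or data.

```python
# A minimal sketch of an investment-on-Q regression with concentration and
# common-ownership controls; simulated data, not the authors' code or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500                                   # hypothetical industry-year observations
q = rng.lognormal(0.2, 0.4, n)            # Tobin's Q
hhi = rng.uniform(0.05, 0.40, n)          # concentration (Herfindahl index)
common_own = rng.uniform(0.0, 0.30, n)    # common-ownership proxy

# Under the paper's competition/governance story, investment loads positively
# on Q and negatively on concentration and common ownership.
inv_rate = (0.02 + 0.03 * q - 0.05 * hhi - 0.04 * common_own
            + rng.normal(0, 0.01, n))

X = sm.add_constant(np.column_stack([q, hhi, common_own]))
res = sm.OLS(inv_rate, X).fit()
print(res.summary(xname=["const", "Q", "HHI", "common_own"]))
```

A negative coefficient on concentration or common ownership, conditional on Q, is the “under-investment relative to Q” pattern the authors emphasize.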

I am on the road and have yet to read it, but it looks like one of the most important papers of the year.

Were U.S. nuclear tests more harmful than we had thought?

So says Keith A. Meyers, a job market candidate from the University of Arizona.  I found this to be a startling result, taken from his secondary paper:

During the Cold War the United States detonated hundreds of atomic weapons at the Nevada Test Site. Many of these nuclear tests were conducted above ground and released tremendous amounts of radioactive pollution into the environment. This paper combines a novel dataset measuring annual county level fallout patterns for the continental U.S. with vital statistics records. I find that fallout from nuclear testing led to persistent and substantial increases in overall mortality for large portions of the country. The cumulative number of excess deaths attributable to these tests is comparable to the bombings of Hiroshima and Nagasaki.

Basically he combines mortality estimates with measures of Iodine-131 concentrations in locally produced milk, “to provide a more precise estimate of human exposure to fallout than previous studies.” The most significant effects are in the Great Plains and Central Northwest of America, and “Back-of-the-envelope estimates suggest that fallout from nuclear testing contributed between 340,000 to 460,000 excess deaths from 1951 to 1973.”
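The research design, as described, is a county-by-year panel: mortality regressed on fallout exposure with county and year fixed effects. Here is a minimal sketch of that kind of design on simulated data; nothing below is the author’s code or dataset.

```python
# A minimal two-way fixed-effects sketch of the described design:
# county-year mortality on a fallout-exposure proxy, with county and year
# effects. All data are simulated; this is not the author's code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
counties, years = 300, range(1951, 1974)
df = pd.DataFrame([(c, y) for c in range(counties) for y in years],
                  columns=["county", "year"])

df["fallout"] = rng.exponential(1.0, len(df))   # I-131 exposure proxy
county_fe = rng.normal(0, 0.3, counties)        # unobserved county differences
df["mortality"] = (9.0 + county_fe[df["county"]]   # deaths per 1,000
                   + 0.05 * df["fallout"]          # true planted effect: 0.05
                   + rng.normal(0, 0.2, len(df)))

res = smf.ols("mortality ~ fallout + C(county) + C(year)", data=df).fit()
print(res.params["fallout"])   # recovers roughly the planted 0.05 effect
```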

His primary job market paper is on damage to agriculture from nuclear testing.

Is there a glass ceiling for female artists?

Using a unique data set consisting of the population of fine art auctions from 2000 to 2017 for Western artists, we provide strong empirical evidence for a glass ceiling for female artists. First, we show that female artists are less likely to transition from the primary (gallery) into the secondary (auction) market. This glass ceiling results in a selection mechanism which is manifested in an average premium of 6% for artworks by female artists. Second, this premium is driven by a small number of women located at the top of the market and turns into a discount when we account for the number of artworks sold. The superstar effect, where a small number of individuals absorbs the majority of industry revenues, is amplified for the group of female artists. Third, at the top 0.1% of the market artworks by female artists are traded at a discount of 9%. Moreover, the very top 0.03% of the market, where 41% of the revenues are concentrated, are still entirely off limits for women. Overall, we find two glass ceilings for women pursuing an artistic career. While the first one is located at the starting point of a female artist’s career, the second one can be found at the transition into the superstar league of the market and remains yet impermeable. Our study has wide-reaching implications for industries characterized by a superstar effect and a strong concentration of men relative to women.

That is the abstract of a new paper by Fabian Y.R.P. Bocart, Marina Gertsberg, and Rachel A. J. Pownall, via the excellent Kevin Lewis.
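The premium and discount figures in the abstract are the kind of numbers a hedonic price regression with a gender indicator produces. Here is a minimal sketch of that technique on simulated data; the roughly 6 percent premium is planted by construction, and none of this is the authors’ specification:

```python
# A minimal hedonic-regression sketch: log auction price on artwork
# characteristics plus a female-artist indicator. Simulated data only;
# the ~6% premium is planted by construction.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 10_000
female = (rng.random(n) < 0.1).astype(float)   # female-artist indicator
size = rng.uniform(0.2, 4.0, n)                # artwork size, square meters
log_price = 8.0 + 0.5 * np.log(size) + 0.06 * female + rng.normal(0, 1.0, n)

X = sm.add_constant(np.column_stack([np.log(size), female]))
res = sm.OLS(log_price, X).fit()
premium = np.expm1(res.params[2])              # log points -> percent premium
print(f"Estimated premium for works by female artists: {premium:.1%}")
```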

Recently I’ve been enjoying @womensart1, a good way to see interesting artworks that otherwise don’t get so much attention.  And here is my older essay “Why Women Succeed, and Fail, in the Arts.”

Hacking the Nazis

Some resisters fought the Nazis in the streets while others fought them from within by hacking some of the world’s first information technology systems. Ava Ex Machina has a fascinating post discussing some of these unheralded hackers. Here is one:

René Carmille was a punch-card computer expert and comptroller general of the French Army, who later would head up the Demographics Department of the French National Statistics Service. As quickly as IBM worked with the Nazis to enable them to use their punch-card computer systems to update census data to find and round up Jewish citizens, René and his team of double agents worked just as fast to manipulate their data to undermine those efforts.

The IEEE newspaper, The Institute, describes Carmille as being an early ethical hacker: “Over the course of two years, Carmille and his group purposely delayed the process by mishandling the punch cards. He also hacked his own machines, reprogramming them so that they’d never punch information from Column 11 [which indicated religion] onto any census card.” His work to identify and build in this exploit saved thousands of Jews from being rounded up and deported to death camps.

René was arrested in Lyon in 1944. He was interrogated for two days by Klaus Barbie, the cruel and brutal SS and Gestapo officer called “the Butcher of Lyon,” but he did not break under torture. He was then sent to the Dachau concentration camp, where he died in 1945.

Hat tip: Tim Harford.

What is the incidence of a tax on tuition waivers?

Here is some basic info: in 2011-2012, 145,000 graduate students received tuition waivers.  On Monday I suggested such a tax is a bad idea, but who would bear the burden?  Let’s say there are three parties: the universities, the graduate students, and third-party funders who support research and graduate students.  Those third parties may be, for instance, Harvard donors or the National Science Foundation.

The short-run, first-order effect is that the grad students pay tax on their waivers and fewer of them pursue postgraduate studies.  And if grad students are dead set on attending no matter what, they bear a relatively high burden of the tax.

That said, there is more to the story.  Universities seek to attract graduate students for multiple reasons, with two possible options being “to enhance their prestige” or “to boost revenue,” or some mix of the two.  It will matter.

To make up for (some of) the tax, and to maintain the flow of students, universities will opt for some mix of lowering tuition, increasing stipends, and increasing non-taxed forms of aid, such as quality of office space or teaching opportunities for grad students.  If universities seek to boost their prestige, they will be quite keen to keep up their “Q,” and not eager to lower Q, even with higher P as recompense on the revenue side.  In that case a relatively high share of the burden will fall on universities.

In contrast, if universities pursue revenue, they are more willing to live with a lower Q if accompanied by a higher P.  More of the burden will fall on students, because the accompanying enrollment-maintaining compensations from the universities will be accordingly lower.
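To fix ideas, here is the incidence arithmetic in toy form; every number is invented for illustration, not drawn from any actual tax schedule:

```python
# Toy incidence arithmetic for a tax on tuition waivers; all numbers invented.
waiver = 40_000          # hypothetical annual tuition waiver
tax_rate = 0.25          # hypothetical marginal tax rate applied to the waiver
stipend_offset = 6_000   # hypothetical university response: extra stipend/aid

extra_tax = tax_rate * waiver                   # $10,000 owed by the student
university_share = stipend_offset / extra_tax   # burden borne by the university
student_share = 1 - university_share            # residual burden on the student

print(f"University bears {university_share:.0%}, student bears {student_share:.0%}")
```

The split depends entirely on how large the offset is, which is what the prestige-versus-revenue distinction above determines.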

I don’t know of a paper estimating the effects of taxing student fellowships, an innovation from the Reagan tax reforms of 1986.  Can any of you lend a hand here?  It didn’t seem to much slow the growth of graduate education as far as I can tell, so perhaps the burden there was borne by universities.

Now enter the third parties.  Donors might give more funds to universities to help make up for taxed tuition waivers.  If you are a Harvard alum, for instance, you might wish to see Harvard carry on its great traditions with yet another generation of Ph.D. economists who initially received tuition waivers.  In other words, you want prestige as an alum, and that requires keeping up the flow of Q, the number of quality students, through the program.  Donors will give more resources to the universities, or to the students (through other vehicles), to help make up for the new tax.  In short, to the extent the donors covet prestige, more of the tax will fall on them.  This is a tax on prestige-seeking!

My intuition is that the schools with a strong donor base will put in much more effort to raise money for graduate students, and they will meet with a fair degree of success.  (Note that Harvard’s now-bigger fundraising campaign will to some extent distract the attention of the president and other senior leaders from other programmatic activities at Harvard; in the longer run that could harm Harvard stakeholders.)  But schools below the top tier don’t so much have this option, so they will decline in resources and status relative to the very top schools.  This is a classic case of how imposing new burdens leads to higher market concentration and cements the status of the elites, in this case the educational elites.

Throughout, I am assuming the universities cannot evade the tax outright, for instance by relabeling the categories of tuition and tuition waiver to avoid the bite altogether.  But that is another possible equilibrium, if the details of the law so allow.

Is Piketty’s Data Reliable?

When Thomas Piketty’s Capital in the Twenty-First Century first appeared many economists demurred on the theory but heaped praise on the empirical work. “Even if none of Piketty’s theories stands up,” Larry Summers argued, his “deeply grounded” and “painstaking empirical research” was “a Nobel Prize-worthy contribution”.

Theory is easier to evaluate than empirical work, however, and Phillip Magness and Robert Murphy were among the few authors to actually take a close look at Piketty’s data, and they came to a different conclusion:

We find evidence of pervasive errors of historical fact, opaque methodological choices, and the cherry-picking of sources to construct favorable patterns from ambiguous data.

Magness and Murphy, however, could be dismissed as economic history outsiders with an ax to grind. Moreover, their paper was published in an obscure libertarian-oriented journal. (Chris Giles and Ferdinando Giugliano writing in the FT also pointed to errors but they could be dismissed as journalists.) The Magness and Murphy conclusions, however, have now been verified (and then some) by a respected figure in economic history, Richard Sutch.

I have never read an abstract quite like the one to Sutch’s paper, The One-Percent across Two Centuries: A Replication of Thomas Piketty’s Data on the Distribution of Wealth for the United States (earlier wp version):

This exercise reproduces and assesses the historical time series on the top shares of the wealth distribution for the United States presented by Thomas Piketty in Capital in the Twenty-First Century…. Here I examine Piketty’s US data for the period 1810 to 2010 for the top 10 percent and the top 1 percent of the wealth distribution. I conclude that Piketty’s data for the wealth share of the top 10 percent for the period 1870 to 1970 are unreliable. The values he reported are manufactured from the observations for the top 1 percent inflated by a constant 36 percentage points. Piketty’s data for the top 1 percent of the distribution for the nineteenth century (1810–1910) are also unreliable. They are based on a single mid-century observation that provides no guidance about the antebellum trend and only tenuous information about the trend in inequality during the Gilded Age. The values Piketty reported for the twentieth century (1910–2010) are based on more solid ground, but have the disadvantage of muting the marked rise of inequality during the Roaring Twenties and the decline associated with the Great Depression. This article offers an alternative picture of the trend in inequality based on newly available data and a reanalysis of the 1870 Census of Wealth. This article does not question Piketty’s integrity.

You know it’s bad when a disclaimer like that is necessary. In the body, Sutch is even stronger. He concludes:

Very little of value can be salvaged from Piketty’s treatment of data from the nineteenth century. The user is provided with no reliable information on the antebellum trends in the wealth share and is even left uncertain about the trend for the top 10 percent during the Gilded Age (1870–1916). This is noteworthy because Piketty spends the bulk of his attention devoted to America discussing the nineteenth-century trends (Piketty 2014: 347–50).

The heavily manipulated twentieth-century data for the top 1 percent share, the lack of empirical support for the top 10 percent share, the lack of clarity about the procedures used to harmonize and average the data, the insufficient documentation, and the spreadsheet errors are more than annoying. Together they create a misleading picture of the dynamics of wealth inequality. They obliterate the intradecade movements essential to an understanding of the impact of political and financial-market shocks on inequality. Piketty’s estimates offer no help to those who wish to understand the impact of inequality on “the way economic, social, and political actors view what is just and what is not” (Piketty 2014: 20).
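Sutch’s central mechanical claim about the 1870-1970 top-10-percent series is simple enough to state in a few lines. The sketch below uses invented top-1-percent shares purely to show the construction he describes; it is not Piketty’s spreadsheet:

```python
# The construction Sutch describes: the reported top-10% share equals the
# top-1% share plus a constant 36 percentage points. Illustrative values only.
top1_share = {1870: 30.0, 1920: 35.0, 1970: 25.0}   # hypothetical, in percent

top10_reported = {year: s + 36.0 for year, s in top1_share.items()}
print(top10_reported)   # {1870: 66.0, 1920: 71.0, 1970: 61.0}
```

Any series built this way tracks the top-1-percent series by construction, which is why Sutch says it carries no independent information about the top 10 percent.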

One of the reasons Piketty’s book received such acclaim is that it fed into concerns about rising inequality, so it’s important to note that Sutch is not claiming that inequality hasn’t risen. Indeed, in some cases, Sutch argues that it has risen by more than Piketty claims. Sutch is rather a journeyman of economic history, upset not about Piketty’s conclusions but about the methods Piketty used to reach those conclusions.