Fed data show large banks are keeping a disproportionate amount in reserves, relative to their assets. The 25 largest US banks held an average of 8 per cent of their total assets in reserves at the end of the second quarter, versus 6 per cent for all other banks. Meanwhile, the four largest US banks — JPMorgan Chase, Bank of America, Citigroup and Wells Fargo — together held $377bn in cash reserves at the end of the second quarter this year, far more than the remaining 21 banks in the top 25.
Since the financial crisis, large banks have been obliged to meet a liquidity coverage ratio (LCR) — a portion of high-quality assets such as cash reserves and Treasuries that can be sold quickly to keep the lights on for a month in a crisis. But regulations also require them to track intraday liquidity — cash they can immediately access — which does not include Treasuries. This additional requirement can vary depending on their business models, which in turn inform supervisors’ and examiners’ bank-specific demands. Executives at several large banks say this puts a de facto premium on reserves that varies by bank.
Second-quarter data from the four largest reserve holders show Wells Fargo held 39 per cent of its high-quality liquid assets in reserves. JPMorgan held 22 per cent, Bank of America held 15 per cent and Citigroup 14 per cent.
“If you have a very large concentration in a few institutions and you lose one or two on any day, then you are losing a major portion of your funding,” said Jim Tabacchi, chief executive at South Street Securities, a broker dealer active in short-term debt markets. “Rates have to skyrocket. It’s simple math.”
Here is the full FT article.
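The LCR arithmetic in the excerpt is simple enough to sketch. A minimal illustration in Python, with made-up balance-sheet figures (not taken from any bank's actual filings):

```python
# Sketch of the liquidity coverage ratio (LCR) described above:
# high-quality liquid assets (HQLA) divided by projected net cash
# outflows over a 30-day stress window. All figures are invented
# for illustration.

def lcr(reserves, treasuries, net_outflows_30d):
    """LCR = HQLA / expected net cash outflows over 30 days (must be >= 1)."""
    hqla = reserves + treasuries
    return hqla / net_outflows_30d

def reserve_share_of_hqla(reserves, treasuries):
    """Share of HQLA held as cash reserves -- the intraday-liquid part,
    since Treasuries must first be sold to raise cash."""
    return reserves / (reserves + treasuries)

# Hypothetical bank: $100bn reserves, $150bn Treasuries, $200bn of
# projected 30-day outflows.
print(lcr(100, 150, 200))               # 1.25 -> requirement met
print(reserve_share_of_hqla(100, 150))  # 0.4 -> 40% of HQLA in reserves
```

The second function is the statistic reported for the four largest banks above: two banks can both satisfy the LCR while holding very different shares of their HQLA as intraday-usable cash.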
That is the theme of my latest Bloomberg column; note that the idea would, to some extent, cut private banks out of the intermediation equation. Here is one excerpt:
An alternative scenario is that the central bank decides to enter the commercial lending business, much as your current bank does. Will the central bank be a better lender than the private banks? Probably not. Central banks are conservative by nature, and have few “roots in the community” as the phrase is commonly understood. The end result would be more funds used to buy Treasury bonds and mortgage securities — highly institutionalized investments — and fewer loans to small and mid-sized local businesses.
The problems run deeper yet. Financial regulation makes a relatively tight distinction between banks and non-banks. Banks have access to the payments system directly and enjoy other privileges, and in return their risk-taking is regulated more heavily (by not only the Fed but also other federal agencies and states). A fintech startup, in contrast, avoids most bank regulations, but it must work through banks to make payments. This division of responsibilities is imperfect, but it has allowed many parts of the U.S. economy to grow and innovate without facing all of the regulations imposed on banks.
This leads to my primary objection to an official government e-currency: It would, in effect, make many more economic institutions more like banks. Over time, those institutions would probably be regulated more like banks, too. For instance, if the Fed is directly transmitting payments made by a private company, it might be wary of credit risk and impose capital and reserve requirements on that company, much as it does on banks. Banks also might complain that they are facing unfair competition, and ask that consistent regulations be imposed. In any case, more of the economy likely will be subject to financial regulation, not just the relatively narrow core of the banking system.
Not all innovation is good innovation.
It’s an ill wind that blows nobody any good, and in Allocating Scarce Organs, Dickert-Conlin, Elder and Teltser find that repealing motorcycle helmet laws generates large increases in the supply of deceased-donor transplant organs. The supply shock, however, is just the experiment that the authors use to measure demand responses. It’s well known that the shortage of transplant organs has led to a long waiting list. The waiting list, however, is only the tip of the iceberg. Many people who could benefit from a transplant never bother getting on the list since their prospects are already so low. In addition, some people have access to a substitute for a deceased-donor transplant, namely a living donor. Finally, there is a quality tradeoff: as more organs become available, the quality of the match may increase, as people may pass on the first available organ to get a better match. The authors use the supply shock to study all these issues:
We find that transplant candidates respond strongly to local supply shocks, along two dimensions. First, for each new organ that becomes available in a market, roughly five new candidates join the local wait list. With detailed zip code data, we demonstrate that candidates listed in multiple locations and candidates living outside of the local market disproportionately drive demand responses. Second, kidney transplant recipients substitute away from living-donor transplants. We estimate the largest crowd out of potential transplants from living donors who are neither blood relatives nor spouses, suggesting that these are the marginal cases in which the relative costs of living-donor and deceased-donor transplants are most influential. Taken together, these findings show that increases in the supply of organs generate demand behavior that at least partially offsets a shock’s direct effects. Presumably as a result of this offset, the average waiting time for an organ does not measurably decrease in response to a positive supply shock. However, for livers, hearts, lungs, and pancreases, we find evidence that an increase in the supply of deceased organs increases the probability that a transplant is successful, defined as graft survival. Among kidney transplant recipients, we hypothesize that living donor crowd out mitigates any health outcome gains resulting from increases in deceased-donor transplants.
In other words, increased organ availability increases the quality of the matches for organs that cannot be given by a living donor (hearts, lungs, pancreases, and partially livers), but for kidneys some of the benefit of increased organ availability accrues to potential living donors, who no longer have to donate, and this means that match quality does not substantially increase.
The authors also critique the geographic isolation of kidney donation regions. As I wrote when Steve Jobs received a kidney transplant:
Although there is no reason to think that Apple CEO Steve Jobs “jumped the line” to get his recent liver transplant, Jobs did have an advantage: He was able to choose which line to stand in.
Contrary to popular belief, transplant organs are not allocated solely according to medical need. Organs are allocated through a complex system of 58 transplant territories. Patients within each territory typically get first dibs on organs from that territory. That’s great if a patient happens to live in a territory with a lot of organ donors and relatively few demanders, but not so good for a patient living in New York, San Francisco or Los Angeles, where waiting lines are longest.
As a result of these “accidents of geography,” relatively healthy patients in some parts of the country get transplants while sicker patients in other parts of the country die waiting.
In this issue:
Let facts be submitted to a candid world: Ron Michener explains the role of monetary affairs in the hardships that helped to justify the rebellion of the American colonies, and criticizes Farley Grubb’s Journal of Economic History article on the money of colonial New Jersey.
Fads and trends in OECD economic thinking: Using the frequency of terms in the OECD’s Economic Surveys, Thomas Barnebeck Andersen shows how policy ideas in economics changed over time, including ‘demand management,’ ‘incomes policy,’ ‘output gap,’ ‘potential GDP,’ ‘structural unemployment,’ ‘structural reform,’ ‘macroprudential,’ ‘incentives,’ ‘deregulation,’ ‘liberalisation,’ ‘privatisation,’ ‘human capital,’ ‘education,’ and ‘PISA.’
The economics of economics: Using 291 person-year observations from UCSD Econ, Yifei Lyu and Alexis Akira Toda model Econ faculty compensation on publications and citations and find, among other things, no evidence of a gender gap.
The Liberal Tradition in South Africa, 1910–2019: Martin van Staden describes the unique history and current standing of classical liberalism in South Africa, including an extensive account of liberals in the nation’s politics. The article extends the Classical Liberalism in Econ, by Country series to 19 articles.
Lawrence Summers Deserves a Nobel Prize for Reviving the Theory of Secular Stagnation: Julius Probst makes the case, inaugurating the series on Who Should Get the Nobel Prize in Economics, and Why?
Convention defined: We reproduce by permission a large portion of David K. Lewis’s Convention: A Philosophical Study (1969), wherein he defined coordination equilibrium, coordination problem, common knowledge, and convention.
Mizuta’s 1967 checklist of Adam Smith’s library: We reproduce by permission the 1967 checklist created by Hiroshi Mizuta of the titles that were owned by Adam Smith. This checklist (supplemented by a list of additional once-elusive titles) provides a handy means for determining whether a title was in Smith’s personal library.
In a context of monopsony power, wages at the top of the spectrum would be held down. Corporations wouldn’t voluntarily redistribute those gains to workers with lower wages. But if firms lacked monopoly power, they wouldn’t be able to retain the gains from monopsony either; the gains would be captured as consumer surplus by the firms’ customers. In order to remain competitive in the market for their goods and services, firms would have to assert their monopsonist power and pass those gains on to the consumer.
…it does match with a context where more skilled workers were captured by powerful firms and less skilled workers benefit indirectly as consumers. Maybe labor incomes had less variance because firms back then were more powerful.
That is from Kevin Erdmann.
We investigate the effect of trade integration on interstate military conflict. Our empirical analysis, based on a large panel data set of 243,225 country-pair observations from 1950 to 2000, confirms that an increase in bilateral trade interdependence significantly promotes peace. It also suggests that the peace-promotion effect of bilateral trade integration is significantly higher for contiguous countries that are likely to experience more conflict. More importantly, we find that not only bilateral trade but global trade openness also significantly promotes peace. It shows, however, that an increase in global trade openness reduces the probability of interstate conflict more for countries far apart from each other than it does for countries sharing borders. The main finding of the peace-promotion effect of bilateral and global trade integration holds robust when controlling for the simultaneous determination of trade and peace.
From Lee and Pyun, Does Trade Integration Contribute to Peace?
The lack of growth response to “Washington Consensus” policy reforms in the 1980s and 1990s led to widespread doubts about the value of such reforms. This paper updates these stylized facts by analyzing moderate to extreme levels of inflation, black market premiums, currency overvaluation, negative real interest rates and abnormally low trade shares to GDP. It finds three new stylized facts: (1) policy outcomes worldwide have improved a lot since the 1990s, (2) improvements in policy outcomes and improvements in growth across countries are correlated with each other, and (3) growth has been good after reform in Africa and Latin America, in contrast to the “lost decades” of the 80s and 90s. This paper makes no claims about causality. However, if the old stylized facts on disappointing growth accompanying reforms led to doubts about economic reforms, new stylized facts should lead to some positive updating of such beliefs.
It’s often said that Australia hasn’t had a recession in nearly 30 years. Paulina Restrepo-Echavarria and Brian Reinbold of the Federal Reserve Bank of St. Louis take a closer look. If a recession is defined as two quarters of negative growth in GDP, then the claim is true. But if you define a recession as two quarters of negative growth in GDP per capita, then there have been three such recessions since 1991: circa 2000-2001, 2005-2006 and 2018-2019.
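The two definitions can be made concrete with a toy calculation; the quarterly figures below are invented for illustration, not actual Australian data:

```python
# "Two consecutive quarters of negative growth" applied to GDP versus
# GDP per capita. With positive population growth, per-capita growth
# can dip negative even while headline GDP growth stays positive.

def technical_recessions(growth):
    """Return start indices of runs of two consecutive negative quarters."""
    return [i for i in range(len(growth) - 1)
            if growth[i] < 0 and growth[i + 1] < 0]

# Hypothetical quarterly growth rates (%): GDP growth stays positive,
# but subtracting assumed quarterly population growth of 0.4% pushes
# per-capita growth below zero for two straight quarters.
gdp_growth = [0.5, 0.3, 0.2, 0.6, 0.5]
pop_growth = 0.4
per_capita_growth = [g - pop_growth for g in gdp_growth]

print(technical_recessions(gdp_growth))         # [] -- no GDP recession
print(technical_recessions(per_capita_growth))  # [1] -- a per-capita recession
```

The same mechanism is why high-population-growth countries like Australia can rack up long "recession-free" streaks on the headline measure.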
Most countries, however, have had more recessions when measured in GDP per capita than in GDP, and Australia still looks comparatively good on this measure. Moreover, the official definition of a US recession is not two quarters of negative growth in real GDP; it’s the more holistic
A significant decline in economic activity spread across the economy, lasting more than a few months, normally visible in real GDP, real income, employment, industrial production, and wholesale-retail sales.
Did Australia have three recessions since 1991 by this measure? It’s difficult to say, but I would look more to unemployment rates. The following graph shows Australian real GDP growth rates in purple measured quarterly, real GDP per capita in green measured annually, and the unemployment rate in blue. (The data are not identical to Restrepo-Echavarria and Reinbold (RER) as I use FRED data and the FRED economists do not!) As per RER, the purple line is generally above the green, so you are more likely to see recessions in GDP per capita than in GDP. Take a look at the unemployment rate, however. The 2005-2006 Australian “recession” is completely absent in unemployment, so I would rule that out. I also do not see any recession as measured by unemployment in 2018-2019; perhaps it is coming, but I would rule it out as of today. The unemployment measure clearly identifies recessions circa 2001-2002, which agrees with RER, and also in 2008-2009, where RER do not identify a recession! Thus, the RER identification of recessions doesn’t work very well, as it has both false positives and false negatives.
On the larger issue of Australian economic performance, at worst, I would identify two mild recessions since 1991, circa 2001-2002 and 2008-2009. Now look again at the graph. The shading is US recessions! The Australian and US economies are integrated enough, and subject to similar enough shocks, that US recession dating clearly picks out Australian recessions as measured by increases in unemployment rates.
The bottom line is that however you measure it, Australian performance looks very good. Moreover, RER are correct that one of the reasons for strong Australian economic performance is higher population growth rates. It’s not that higher population growth rates are masking poorer performance in real GDP per capita; rather, in my view, higher population growth rates are contributing to strong performance as measured by both real GDP and real GDP per capita.
A study in London found 74.9 per cent of people choose to stand instead of walking, especially on the longer ones. With this ‘stand on the right, walk on the left’ rule, we’re giving up 50 per cent of the space on our escalators for roughly 25 per cent of our commuters.
Look for this problem next time during rush hour where the “standing” side of the escalators ends up with a line of people trying to get on. It may seem counterintuitive, but people who are walking up escalators to save seconds off their commute are actually slowing everyone else down.
Efficiency aside, there’s another reason why walking on escalators might be a bad idea—safety. Escalator accidents are much more common than you think.
Many of those victims were likely walking. A study in Tokyo found almost 60 per cent of escalator accidents between 2013 and 2014 resulted from people using escalators improperly, which includes people walking or running on them.
Here is the full story, via Michelle Dawson. Walking up the escalator remains time efficient, however, if those choosing to walk have much higher valuations of time than those who choose to stand. Might that be the case?
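One way to see the capacity argument is a toy throughput model. All parameters below are illustrative guesses, not measured values:

```python
# Rough sketch of the escalator argument above: a dedicated walking
# lane is throughput-limited by how few people actually walk.

escalator_speed = 0.75   # steps per second (assumed)
walk_speed = 1.5         # steps per second for walkers (assumed)
stand_density = 1 / 2    # standers occupy every other step
walk_density = 1 / 3     # walkers leave more space between them
walker_share = 0.25      # roughly a quarter of commuters choose to walk

# Policy 1: everyone stands -- both lanes run at standing capacity.
both_stand = 2 * escalator_speed * stand_density

# Policy 2: stand right, walk left, with a long queue at rush hour.
# The standing lane runs at capacity; the walking lane carries only
# the walkers drawn from the same queue (walker_share / (1 - walker_share)
# walkers per stander), capped by the lane's own physical capacity.
stand_lane = escalator_speed * stand_density
walk_lane = min(walk_speed * walk_density,
                stand_lane * walker_share / (1 - walker_share))
split_policy = stand_lane + walk_lane

print(both_stand)             # 0.75 people per second
print(round(split_policy, 3)) # 0.5 people per second -- the split loses
```

With these assumed numbers the split policy matches the article's framing: half the physical capacity is reserved for roughly a quarter of the riders, so total throughput falls, even though each individual walker still gets upstairs faster.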
An excellent review, here is the closing bit:
Labeling it a “love letter” would render Cowen promiscuous (or is it adulterous?) since the letter is addressed to so many businesses. Seen as a fan letter, the book works much better. Cowen, a huge NBA fan, realizes that his hardwood heroes are supposed to be great at basketball. It’s OK to admire them from afar and not an affront if they cannot be your actual friends, as long as they can drain three-pointers. Likewise, with his business anti-heroes.
Do read the whole thing.
In a follow-up paper released Friday, another economist, Adam Ozimek, revisited Mr. Blinder’s analysis to see what had happened over the past decade. Some job categories that Mr. Blinder identified as vulnerable [to offshoring], like data-entry workers, have seen a decline in United States employment. But the ranks of others, like actuaries, have continued to grow.
Over all, of the 26 occupations that Mr. Blinder identified as “highly offshorable” and for which Mr. Ozimek had data, 15 have added jobs over the past decade and 11 have cut them. Altogether, those occupations have eliminated fewer than 200,000 jobs over 10 years, hardly the millions that many feared. A second tier of jobs — which Mr. Blinder labeled “offshorable” — has actually added more than 1.5 million jobs.
But Mr. Blinder didn’t miss the mark entirely, said Mr. Ozimek, who is chief economist at Upwork, an online platform for hiring freelancers. The new study found that in the jobs that Mr. Blinder identified as easily offshored, a growing share of workers were now working from home. Mr. Ozimek said he suspected that many more were working in satellite offices or for outside contractors, rather than at a company’s main location. In other words, technology like cloud computing and videoconferencing has enabled these jobs to be done remotely, just not quite as remotely as Mr. Blinder and many others assumed.
Here is more from Ben Casselman (NYT).
Observing India tends to make people more libertarian. At least parts of the private sector are quite vibrant, and the heavy hand of government can be seen in many places. Plus you might think “the country is too big in the first place,” so you will be thinking in terms of decentralization, and devolving power to the states and union territories, rather than strengthening the central authority.
Observing Pakistan tends to make people more statist. The private sector has fewer well-known successes. The central authority appears too weak, and problems with insufficient tax revenue are extreme, even for a developing economy. As for federal income tax, there are only about 1.2 million active taxpayers, in a country of over 200 million people. The very pleasant Islamabad aside, urban public goods seem underprovided, even relative to Indian cities.
It is an interesting question which countries at least seem to provide evidence for which sets of political views.
Most recently, the city has been beset by a plague of flies — a “bullying force,” says the New York Times, “sparing no one.” The swarm of flies, which I was fortunate enough to miss, was the result of monsoon season, malfunctioning drainage systems clogged with solid waste, and slaughtered animals from the Muslim celebration of Eid. (The same monsoon season, by the way, led to power blackouts of up to 60 hours.) On a livability index, Karachi ranks near the bottom, just ahead of Damascus, Lagos, Dhaka and Tripoli.
There is no subway, and a typical street scene blends cars, auto-rickshaws, motorbikes and the occasional donkey pulling a cart. It’s fun for the visitor, but I wouldn’t call transportation easy.
And yet to see only those negatives is to miss the point. Markets speak more loudly than anecdotes, and the population of Karachi continues to rise — a mark of the city’s success. This market test is more important than the aesthetic test, and Karachi unambiguously passes it.
Most of all, I am impressed by the tenacity of Pakistan. Before going there, I was very familiar with the cliched claim that Pakistan is a fragile tinderbox, barely a proper country, liable to fall apart any moment and collapse into civil war. Neither my visit nor my more focused reading has provided any support for that view, and perhaps it is time to retire it. Pakistan’s national identity may be strongly contested but it is pretty secure, backed by the growing use of Urdu as a national language — and cricket to boot. It has come through the Afghan wars battered but intact.
That is all from my longer than usual Bloomberg column, all about Karachi.
Excellent throughout, Alain put on an amazing performance for the live audience at the top floor of the Observatory at the old World Trade Center site. Here is the audio and transcript, most of all we talked about cities. Here is one excerpt:
COWEN: Will America create any new cities in the next century? Or are we just done?
BERTAUD: Cities need a good location. This is a debate I had with Paul Romer when he was interested in charter cities. He had decided that he could create 50 charter cities around the world. And my reaction — maybe I’m wrong — but my reaction is that there are not 50 very good locations for cities around the world. There are not many left. Maybe with Belt and Road, maybe the opening of Central Asia. Maybe the opening of the ocean route on the northern, following the pole, will create the potential for new cities.
But cities like Singapore, Malacca, Mumbai are there for a good reason. And I don’t think there’s that many very good locations.
COWEN: Or Greenland, right?
BERTAUD: Yes. Yes, yes.
COWEN: What is your favorite movie about a city? You mentioned a work of fiction. Movie — I’ll nominate Escape from New York.
Here is more:
COWEN: Your own background, coming from Marseille rather than from Paris —
BERTAUD: I would not brag about it normally.
COWEN: But no, maybe you should brag about it. How has that changed how you understand cities?
BERTAUD: I’m very tolerant of messy cities.
COWEN: Messy cities.
COWEN: Why might that be, coming from Marseille?
BERTAUD: When we were schoolchildren in Marseille, we were used to a city which has a . . . There’s only one big avenue. The rest are streets which were created locally. You know, the vernacular architecture.
In our geography book, we had this map of Manhattan. Our first reaction was, the people in Manhattan must have a hard time finding their way because all the streets are exactly the same.
BERTAUD: In Marseille we oriented ourselves by the angle that a street made with another. Some were very narrow, some very, very wide. One not so wide. But some were curved, some were . . . And that’s the way we oriented ourselves. We thought Manhattan must be a terrible place. We must be lost all the time.
COWEN: And what’s your best Le Corbusier story?
BERTAUD: I met Le Corbusier at a conference in Paris twice. Two conferences. At the time, he was at the top of his fame, and he started the conference by saying, “People ask me all the time, what do you think? How do you feel being the most well-known architect in the world?” He was not a very modest man.
BERTAUD: And he said, “You know what it feels? It feels that my ass has been kicked all my life.” That’s the way he started this. He was a very bitter man in spite of his success, and I think that his bitterness is shown in his planning and some of his architecture.
COWEN: Port-au-Prince, Haiti — overrated or underrated?
Strongly recommended, and note that Bertaud is eighty years old and just coming off a major course of chemotherapy, a remarkable performance.
Again, I am very happy to recommend Alain’s superb book Order Without Design: How Markets Shape Cities.
I recently wrote a post, Short Selling Reduces Crashes, about a paper which used Regulation SHO, an unusual SEC experiment that temporarily lifted short-sale constraints for randomly designated stocks, as a natural experiment. A correspondent writes to ask whether I was aware that Regulation SHO has been used by more than fifty other studies to test a variety of hypotheses. I was not! The problem is obvious: if the same experiment is used multiple times, we should be imposing multiple-hypothesis standards to avoid the green jelly bean problem, otherwise known as the false-positive problem. Heath, Ringgenberg, Samadi and Werner make this point and test for false positives in the extant literature:
Natural experiments have become an important tool for identifying the causal relationships between variables. While the use of natural experiments has increased the credibility of empirical economics in many dimensions (Angrist & Pischke, 2010), we show that the repeated reuse of a natural experiment significantly increases the number of false discoveries. As a result, the reuse of natural experiments, without correcting for multiple testing, is undermining the credibility of empirical research.
…To demonstrate the practical importance of the issues we raise, we examine two extensively studied real-world examples: business combination laws and Regulation SHO. Combined, these two natural experiments have been used in well over 100 different academic studies. We re-evaluate 46 outcome variables that were found to be significantly affected by these experiments, using common data frequency and observation window. Our analysis suggests that many of the existing findings in these studies may be false positives.
There is a second, more subtle problem. If more than one of the effects is real, it calls into question the exclusion restriction. To identify the effect of X on Y1 we need to assume that X influences Y1 along only one path. But if X also influences Y2, that suggests there might be multiple paths from X to Y1. Morck and Yeung made this point many years ago, likening the reuse of the same instrumental variables to a tragedy of the commons.
Solving these problems is made especially difficult because they are collective action problems with a time dimension. A referee who sees a paper throw the dice multiple times may demand multiple-hypothesis and exclusion-test corrections. But if the problem is that there are many papers each running a single test, the burden on the referee to know the literature is much larger. Moreover, do we give the first and second papers a pass and only demand multiple-hypothesis corrections for the 100th paper? That seems odd, although in practice it is what happens, as more original papers can get published with weaker methods (collider bias!).
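The green jelly bean problem is easy to simulate. Below, one "natural experiment" (a random treatment assignment) is reused to test many unrelated outcomes, none of which it actually affects; uncorrected 5% tests still produce discoveries, while a Bonferroni correction largely removes them. Purely illustrative, with no real data:

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(42)
n_units, n_tests, alpha = 200, 100, 0.05

# One experiment: half the units randomly "treated".
treated = set(random.sample(range(n_units), n_units // 2))
is_treated = [i in treated for i in range(n_units)]

def z_stat(outcome):
    """Standardized difference in means between treated and control."""
    t = [y for y, d in zip(outcome, is_treated) if d]
    c = [y for y, d in zip(outcome, is_treated) if not d]
    se = (stdev(t) ** 2 / len(t) + stdev(c) ** 2 / len(c)) ** 0.5
    return (mean(t) - mean(c)) / se

# 100 outcome variables of pure noise, all tested against the same shock.
zs = [z_stat([random.gauss(0, 1) for _ in range(n_units)])
      for _ in range(n_tests)]

crit = NormalDist().inv_cdf(1 - alpha / 2)                   # ~1.96
crit_bonf = NormalDist().inv_cdf(1 - alpha / (2 * n_tests))  # ~3.48

hits = sum(abs(z) > crit for z in zs)
hits_bonf = sum(abs(z) > crit_bonf for z in zs)
print(hits)       # false "discoveries" at the naive 5% level
print(hits_bonf)  # typically zero after the Bonferroni correction
```

The time-dimension version of the problem is exactly this simulation run one paper at a time: each study's single test looks clean in isolation, and only the literature-wide tally reveals the inflated false-discovery count.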
As I wrote in Why Most Published Research Findings are False we need to address these problems with a variety of approaches:
1) In evaluating any study, try to take into account the amount of background noise. That is, remember that the more hypotheses are tested and the less selection goes into choosing hypotheses [this is one reason why theory is important: it strengthens selection, AT], the more likely it is that you are looking at noise.
2) Bigger samples are better. (But note that even big samples won’t help to solve the problems of observational studies which is a whole other problem).
3) Small effects are to be distrusted.
4) Multiple sources and types of evidence are desirable.
5) Evaluate literatures not individual papers.
6) Trust empirical papers which test other people’s theories more than empirical papers which test the author’s theory.
7) As an editor or referee, don’t reject papers that fail to reject the null.