The AEA’s New Data Policy

The AEA has long had a data repository, but no one was responsible for examining the data or replicating a paper’s results, and confidential data was treated as an exception. All that is about to change. The AEA has hired a Data Editor, Lars Vilhuber. Vilhuber will be responsible for verifying that an author’s code produces the claimed results from the given data. In some cases Vilhuber will even verify results from raw data all the way to table output.

The new data policy is a significant increase in the requirements to publish in an AEA journal. It takes an immense amount of work to document every step of the empirical process in a replicable way. It’s all to the good, of course, but it is remarkable how little economists train our students in these techniques. Make no mistake: writing code to be replicable from day one is an art and a science, and it needs to be part of the econometrics sequence. All hail Gentzkow and Shapiro!
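
What does replicable-from-day-one look like in practice? Here is a minimal sketch, in the spirit of Gentzkow and Shapiro’s coding guide, of the kind of one-command pipeline the new policy rewards; the file names and steps are illustrative assumptions, not anything the AEA prescribes.

    # run_all.py -- illustrative master script: raw data in, tables out.
    # Running "python run_all.py" should reproduce every result in the paper.
    # The step scripts named below are hypothetical placeholders.
    import subprocess
    import sys

    STEPS = [
        "01_clean_raw_data.py",   # raw files -> analysis dataset
        "02_estimate_models.py",  # analysis dataset -> estimates
        "03_make_tables.py",      # estimates -> output/table1.tex, ...
    ]

    for step in STEPS:
        print(f"running {step} ...")
        if subprocess.run([sys.executable, step]).returncode != 0:
            sys.exit(f"{step} failed; stopping so errors are not silently ignored.")
    print("done: all tables rebuilt from raw data")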

Here’s more information:

On July 10, 2019, the Association adopted an updated Data and Code Availability Policy, which can be found at https://www.aeaweb.org/journals/policies/data-code. The goal of the new policy is to improve the reproducibility and transparency of materials supporting research published in the AEA journals by providing improved guidance on the types of materials required, increased quality control, and more review earlier in the publication process.

What’s new in the policy? Several items of note:

  • A central role for the AEA Data Editor. The inaugural Data Editor was appointed in January 2018 and will oversee the implementation of the new policy.

  • The policy now clearly applies to code as well as data and explains how to proceed when data cannot be shared by an author. The Data Editor will regularly ask for the raw data associated with a paper, not just the analysis files, and for all programs that transform the raw data into the files from which the paper’s results are computed. Replication archives will now be requested prior to acceptance, rather than after acceptance during the publication process, giving the Data Editor more time to review materials.

  • Will the Data Editor’s team run authors’ code prior to acceptance? Yes, to the extent that it is feasible. The code will need to produce the reported results, given the data provided. Authors can consult a generic checklist, as well as the template used by the replicating teams.

  • Will code be run even when the data cannot be posted? This was once an exemption, but the Data Editor will now attempt to conduct a reproducibility check of these materials through a third party who has access to the (confidential or restricted) data. Such checks have already been successfully conducted using the protocol outlined here.

India’s Tallest Building Cut Down To Size

The FT writes about the bust in India’s construction sector:

It was meant to be the tallest building in India, with luxury flats, a swimming pool and cinema where billionaires and Bollywood stars could enjoy a life of perfect splendour looking down over the Mumbai skyline.

But the Palais Royale complex now sits unfinished alongside other partially built structures tangled in the megacity’s traffic-choked downtown streets, an apt symbol of a crisis that threatens a key part of India’s financial system.

Part of the problem is cyclical: a shadow banking system overextended credit and is now having to deleverage. India’s construction sector, however, is also plagued by systemic issues, including the fact that major construction projects are invariably sued and thus become entangled with India’s notoriously slow legal system. Drawing on a Brookings India working paper by Gandhi, Tandel, Tabarrok and Ravi, the FT notes:

But progress was soon slowed by legal challenges over allegedly unauthorised features, sparking a series of delays….However grand the planned building, Palais Royale’s woes fit a familiar pattern: 30 per cent of real estate projects and half of all built-up space in Mumbai is under litigation, according to a 2019 Brookings India report, with projects taking an average of eight and a half years to complete.

Compensating Kidney Donors

The Trump administration will allow greater compensation for live kidney donors.

Supporting Living Organ Donors.  Within 90 days of the date of this order, the Secretary shall propose a regulation to remove financial barriers to living organ donation.  The regulation should expand the definition of allowable costs that can be reimbursed under the Reimbursement of Travel and Subsistence Expenses Incurred Toward Living Organ Donation program, raise the limit on the income of donors eligible for reimbursement under the program, allow reimbursement for lost-wage expenses, and provide for reimbursement of child-care and elder-care expenses.

While pure compensation is still illegal, this goes a long way to recouping costs. In addition, the executive order improves the rules that govern the organ procurement organizations with the goal of decreasing the number of wasted organs. Compensating kidney donors is a policy that I have long supported. Together the two changes could save thousands of lives. Even Dylan Matthews, a living organ donor who writes for Vox, is pleased.

Hat tip: Frank McCormick

Is Democracy Doomed?

Democracies are much richer than non-democracies, and their wealth has made them the envy of the world. The close correlation between democracy, high GDP per capita, and economic, military, and cultural power has made modernity appear to be a package deal. When people look at rich, powerful countries, they typically see a democracy and they think, “I want that.”

At the same time, however, the academic literature on the causal effect of democracy on growth has shown at best weak results. Here is the all-star team of Acemoglu, Naidu, Restrepo, and Robinson (ungated) in the JPE summarizing:

With the spectacular economic growth under nondemocracy in China, the eclipse of the Arab Spring, and the recent rise of populist politics in Europe and the United States, the view that democratic institutions are at best irrelevant and at worst a hindrance for economic growth has become increasingly popular in both academia and policy discourse. For example, the prominent New York Times columnist Tom Friedman (2009) argues that “one-party nondemocracy certainly has its drawbacks. But when it is led by a reasonably enlightened group of people, as China is today, it can also have great advantages. That one party can just impose the politically difficult but critically important policies needed to move a society forward in the 21st century.” Robert Barro (1997, 1) states this view even more boldly: “More political rights do not have an effect on growth.”

Although some recent contributions estimate a positive effect of democracy on growth, the pessimistic view of the economic implications of democracy is still widely shared. From their review of the academic literature until the mid-2000s, Gerring et al. (2005, 323) conclude that “the net effect of democracy on growth performance cross-nationally over the last five decades is negative or null.”

Acemoglu et al. continue, “In this paper, we challenge this view.” Indeed, using a multitude of sophisticated econometric strategies, Acemoglu et al. conclude that “Democracy Does Cause Growth.” In their sample of 175 countries from 1960 to 2010, they find that democracies have a GDP per capita about four times higher than nondemocracies ($8,149 v. $2,074). (This is uncorrected for time or other factors.) But how much of this difference is explained by democracy? Hardly any. Acemoglu et al. write:

Our estimates imply that a country that transitions from nondemocracy to democracy achieves about 20 percent higher GDP per capita in the next 25 years than a country that remains a nondemocracy.

In other words, if the average nondemocracy in their sample had transitioned to a democracy, its GDP per capita would have increased from $2,074 to $2,489 over 25 years (i.e., this is the causal effect of democracy, ignoring other factors changing over time). Twenty percent is better than nothing and better than dictatorship, but it’s weak tea. GDP per capita in the United States is about 20% higher than in Sweden, Denmark or Germany and 40% higher than in France, but I don’t see a big demand in those countries to adopt US practices. Indeed, quite the opposite! If we want countries to adopt democracy, twenty percent higher GDP in 25 years is not a big carrot.
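
To put the 20 percent figure in annual terms, here is the back-of-the-envelope arithmetic in Python; the $2,074 baseline and the 20 percent/25-year effect are from Acemoglu et al., while the annualization is my own:

    # Acemoglu et al.: transitioning to democracy raises GDP per capita
    # by about 20% over the following 25 years, from a baseline of $2,074.
    baseline, effect, years = 2074, 0.20, 25

    gdp_after = baseline * (1 + effect)
    annual_premium = (1 + effect) ** (1 / years) - 1
    print(f"GDP per capita after 25 years: ${gdp_after:,.0f}")      # $2,489
    print(f"implied extra growth per year: {annual_premium:.2%}")   # 0.73%

Less than three-quarters of a percentage point of extra growth per year is the kind of effect that other factors can easily swamp.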

As someone who favors democracy as a limit on government abuse, I find this worrying. One optimistic response is that the nondemocracies that adopt the policies necessary to make a nation rich, such as support for property rights, open markets and the free exchange of ideas, may not be such bad places. These beasts, however, appear to be rare. But if they are truly rare, there must be more to the democracy-GDP per capita correlation than Acemoglu et al. estimate. So what are they missing? I am uncertain.

If democracies don’t substantially increase growth, why are they rich? Acemoglu et al. don’t spend time on this question, but the answer appears to be reverse causality (from wealth to democracy) and the fact that today’s rich democracies adopted capitalism early. But don’t expect the wealth-to-democracy link to be everywhere and always true; it’s culturally and historically bound. And catch-up is eliminating the benefits of the head start.

If much of the allure of democracy has been higher GDP per capita, then the allure has rested on a mistake: confusing correlation with causation. A fortunate mistake, but a mistake. The literature on democracy and growth implies that there is no reason to reject an alternative history in which the world’s leading industrial economy was a nondemocracy. Nor is there any reason we could not see some very rich nondemocracies in the future–nondemocracies as close to parity with the United States as, say, Sweden, Denmark and Germany are today. If that happens, the case for democracy will look much weaker than it does now, as the correlation between democracy and wealth breaks down and the weakness of the causal effect becomes evident even to those without sophisticated econometrics.

Hat tip: Garett Jones for discussion.

Phonics-Based Direct Instruction

Linguist John McWhorter strongly supports phonics and direct instruction:

Now that it’s summer, I have a suggestion for how parents can grant their wee kiddies the magic of reading by Labor Day: Pick up Siegfried Engelmann’s Teach Your Child to Read in 100 Easy Lessons. My wife and I used it a while ago with our then-4-year-old daughter, and after a mere 20 cozy minutes a night, a little girl who on Memorial Day could recognize on paper only the words no and stop and the names of herself and her family members could, by the time the leaves turned, read simple books.

…Engelmann’s book, which he co-wrote with Phyllis Haddox and Elaine Bruner, was first published in the early 1980s, but it was based on work from the late 1960s. That’s when Engelmann was involved in the government-sponsored Project Follow Through, whose summary report compared nine methods for how to teach reading and tracked results on 75,000 children from kindergarten through third grade. The results, though some critics over the years have rejected them on methodological grounds, were clear: The approach that proved most effective was based on phonics—teaching children how to sound words out, letter by letter, rather than encouraging students to recognize words as single chunks, also called the whole-word system. Specifically, the most successful approach supplemented basic phonics with a tightly scripted format emphasizing repetition and student participation, often dubbed “direct instruction.” As I have previously explained for NPR, the results were especially impressive among poor children, including black ones.

…And yet in the education world, Engelmann’s technique is considered controversial.

Here are previous MR posts on Direct Instruction, the teaching method that works even though many teachers don’t like it.

Allegedly Unique Events

One common response to yesterday’s post, What is the Probability of a Nuclear War?, was to claim that probability cannot be assigned to “unique” events. That’s an odd response. Do such respondents really believe that the probability of a nuclear war was not higher during the Cuban Missile Crisis than immediately afterwards, when a hotline was established and the Partial Nuclear Test Ban Treaty signed?

Claiming that probability cannot be assigned to unique events seems more like an excuse to ignore best estimates than a credible epistemic position. Moreover, the claim that probability cannot be assigned to “unique” events is testable, as Philip Tetlock points out in an excellent 80,000 Hours Podcast with Robert Wiblin.

I mean, you take that objection, which you hear repeatedly from extremely smart people that these events are unique and you can’t put probabilities on them, you take that objection and you say, “Okay, let’s take all the events that the smart people say are unique and let’s put them in a set and let’s call that set allegedly unique events. Now let’s see if people can make forecasts within that set of allegedly unique events and if they can, if they can make meaningful probability judgments of these allegedly unique events, maybe the allegedly unique events aren’t so unique after all, maybe there is some recurrence component.” And that is indeed the finding that when you take the set of allegedly unique events, hundreds of allegedly unique events, you find that the best forecasters make pretty well calibrated forecasts fairly reliably over time and don’t regress too much toward the mean.

In other words, since an allegedly unique event either happens or it doesn’t, it is difficult to claim that any single probability estimate was better than another. But when we look at many forecasts, each of an allegedly unique event, what we find is that some people get more of them right than others. Moreover, the individuals who get more events right approach these questions using a set of techniques and tools that can be replicated and used to improve other forecasters. Here’s a summary from Mellers, Tetlock, Baker, Friedman and Zeckhauser:

In recent years, IARPA (the Intelligence Advanced Research Projects Activity), the research wing of the U.S. Intelligence Community, has attempted to learn how to better predict the likelihoods of unique events. From 2011 to 2015, IARPA sponsored a project called ACE, comprising four massive geopolitical forecasting tournaments conducted over the span of four years. The goal of ACE was to discover the best possible ways of eliciting beliefs from crowds and optimally aggregating them. Questions ranged from pandemics and global leadership changes to international negotiations and economic shifts. An example question, released on September 9, 2011, asked, “Who will be inaugurated as President of Russia in 2012?”…The Good Judgment Project studied over a million forecasts provided by thousands of volunteers who attached numerical probabilities to such events (Mellers, Ungar, Baron, Ramos, Gurcay, et al., 2014; Tetlock, Mellers, Rohrbaugh, & Chen, 2014).

In the ACE tournaments, IARPA defined predictive success using a metric called the Brier scoring rule (the squared deviation between forecasts and outcomes, where outcomes are 0 and 1 for the non-occurrence and occurrence of events, respectively; Brier, 1950). Consider the question, “Will Bashar al-Assad be ousted from Syria’s presidency by the end of 2016?” Outcomes were binary; Assad either stays or he is ousted. Suppose a forecaster predicts that Assad has a 60% chance of staying and a 40% chance of being ousted. If, at the end of 2016, Assad remains in power, the participant’s Brier score would be [(1-.60)^2 + (0-.40)^2] = 0.16. If Assad is ousted, the forecaster’s score is [(0-.60)^2 + (1-.40)^2] = 0.36. With Brier scores, lower values are better, and zero is a perfect score.

…The Good Judgment Project won the ACE tournaments by a wide margin each year by being faster than the competition at finding ways to push probabilities toward 0 for things that did not happen and toward 1 for things that did happen. Five drivers of accuracy accounted for Good Judgment’s success. They were identifying, training, teaming, and tracking good forecasters, as well as optimally aggregating predictions (Mellers et al., 2014; Mellers, Stone, Atanasov, Rohrbaugh, Metz, et al., 2015a; Mellers, Stone, Murray, Minster, Rohrbaugh, et al., 2015b).
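
The Brier arithmetic in the excerpt is easy to check. Here is a minimal sketch in Python using the Assad example from the quote; note that the quoted scores are the squared error per outcome category (averaged over the two categories), which is half of Brier’s original sum:

    # Brier score for the Assad question: forecast 60% stays / 40% ousted.
    # Averaging over the two categories reproduces the quoted 0.16 and 0.36.
    def brier(forecast, outcome):
        # forecast and outcome are probability vectors over the categories
        return sum((f - o) ** 2 for f, o in zip(forecast, outcome)) / len(forecast)

    forecast = [0.60, 0.40]                    # P(Assad stays), P(Assad ousted)
    print(round(brier(forecast, [1, 0]), 2))   # Assad stays:  0.16
    print(round(brier(forecast, [0, 1]), 2))   # Assad ousted: 0.36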

What is the Probability of a Nuclear War?

I agree with Tyler who wrote recently that “the risk of nuclear war remains the world’s No. 1 problem, even if that risk does not seem so pressing on any particular day.”

The probability of a nuclear war is inherently difficult to predict, but what strikes me in this careful survey by Luisa Rodriguez for the Effective Altruism Forum is how much higher all the expert predictions and model forecasts are compared to what we would like them to be. Keep in mind that the following are annualized probabilities. For a child born today (say, a 75-year life expectancy), an annualized probability of .0117 implies that the chance of a nuclear war in their lifetime is nearly 60% (1-(1-.0117)^75). At an annualized probability of .009, the estimate from accident analysis, it’s approximately 50%. See Rodriguez and also Schlosser’s Command and Control on the frightening number of near misses, including one nuclear weapon dropped on North Carolina.
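
The lifetime figures are just the annual risk compounded over 75 years; here is a quick check in Python (the annual probabilities are Rodriguez’s, the compounding is mine):

    # Chance of at least one nuclear war over a 75-year life, assuming a
    # constant, independent annual probability (see the addendum below).
    def lifetime_risk(annual_p, years=75):
        return 1 - (1 - annual_p) ** years

    print(f"{lifetime_risk(0.0117):.1%}")   # aggregated estimate: 58.6%
    print(f"{lifetime_risk(0.0090):.1%}")   # accident analysis:   49.2%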

These lifetime numbers don’t strike me as crazy, just crazy high. Here is Rodriguez summarizing:

If we aggregate historical evidence, the views of experts and predictions made by forecasters, we can start to get a rough picture of how probable a nuclear war might be.[8] We shouldn’t put too much weight on these estimates, as each of the data points feeding into those estimates come with serious limitations. But based on the evidence presented above, we might think that there’s about a 1.17% chance of nuclear war each year and that the chances of a US-Russia nuclear war may be in the ballpark of 0.39% per year.

Addendum: A number of people in the comments mention that the probabilities are not independent. Of course, but that doesn’t necessarily make the cumulative probability smaller; it could be larger.
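
A toy two-year example, with made-up numbers, shows why dependence can cut either way:

    # Two years, each with a 1% marginal chance of war. The probability of
    # war in at least one year depends on how the years are correlated.
    p = 0.01
    print(round(1 - (1 - p) ** 2, 4))   # independent years:               0.0199
    print(p)                            # perfectly positively correlated: 0.01 (smaller)
    print(2 * p)                        # mutually exclusive years:        0.02 (larger)

Positive dependence pulls the cumulative risk below the independence benchmark; negative dependence pushes it above.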

Is Bitcoin Halal?

Press TV: A report by Iran’s Mehr news agency last week showed that bitcoin miners were using power in buildings and properties that enjoy a lower price for electricity, including factories, greenhouses, government offices and mosques.

…A spokesman of Iran’s Ministry of Energy said on Monday that the country’s power grid had become unstable as a result of increased mining of cryptocurrencies.

Bitcoin mining in a mosque may seem outré, but at least it’s not money lenders in the mosque. In fact, Bitcoin is halal, at least according to one source (quoted here):

As a payment network, Bitcoin is halal. In fact, Bitcoin goes beyond what more conventional closed banking networks offer. Unlike conventional bank networks which use private ledgers where there’s no guarantee that the originator actually owns the underlying assets, Bitcoin guarantees with mathematical certainty that the originator of the transfer owns the underlying assets. Conventional banks operate using the principle of fractional reserve, which is prohibited in Islam.

Muhammad was a merchant and much more open to business than Jesus is in some traditional Christian interpretations. For example, compare Jesus, “it is easier for a camel to go through the eye of a needle than for a rich man to enter the kingdom of God,” with one of Muhammad’s sayings:

Abu Said related that the Prophet said: The truthful and trustworthy businessman will be in the company of Prophets, saints and martyrs on the Day of Judgment. (Darimi, Tirmidhi)

Prisoners

New Yorker: On May 13, 1943, Axis forces in North Africa surrendered. The Allies suddenly found themselves saddled with nearly three hundred thousand prisoners of war, including the bulk of General Erwin Rommel’s famed Afrika Korps. Unable to feed or house their share, the British asked their American comrades to relieve them of the burden. And so, by the tens of thousands, German soldiers were loaded aboard Liberty Ships, which had carried American troops across the Atlantic. Eventually, some five hundred P.O.W. camps, scattered across forty-five of the forty-eight United States, housed some four hundred thousand men. In every one of those camps, the Geneva conventions were adhered to so scrupulously that, after the war, not a few of the inmates decided to stick around and become Americans themselves. That was extraordinary rendition, Greatest Generation style.

That’s the opening to a piece by Hendrik Hertzberg from 2011; the piece is thus motivated neither by President Trump nor by the separation of children from their parents at the border. For that reason it is perhaps more relevant to these issues than otherwise. We can and have been worse, but let no one say that we have not and cannot be better.

Hat tip: Jason Kuznicki.

Tearing Up an Economics Textbook

Robert Samuelson, the economics columnist, has written a column titled “It’s time we tear up our economics textbooks and start over.” What he actually says is that we should tear up Greg Mankiw’s Principles of Economics:

But as a teaching device, [Mankiw’s] “Principles of Economics” has fallen behind. There’s little analysis of the impact of the Internet and digitalization on competition and markets. I couldn’t find either Apple or Facebook in the index; Google gets a few mentions.

Likewise, little attention is paid to the 2007-2009 Great Recession, the worst business downturn since the Great Depression, which also receives scant coverage relative to its significance. (Together, the two recessions receive about three pages, from 725 to 727.)

There’s some misleading information about the Great Recession and parallel financial crisis. On Page 691, we have this: “Today, bank runs are not a major problem for the U.S. banking system or the Fed.” This would surely surprise the Fed, which poured trillions of dollars into the economy to prevent financial collapse.

Mankiw’s assertion can be defended on narrow, technical grounds. There was no run by retail depositors (people like you and me) against commercial banks. We were protected by deposit insurance. But there was a huge run — a panic — by institutional investors (pension funds, hedge funds, insurance companies, endowments) that withdrew funds from traditional banks, investment banks and the commercial paper market.

…Mankiw’s textbook needs more than a touch-up; it needs a major overhaul. It has very little history: for example, the industrialization of the 19th century. Nor is there much about the expansion of the global economy. China gets a few mentions.

The market for principles textbooks, however, is competitive, and there are alternatives to Mankiw. Krugman and Wells, for example, have a lot of very interesting boxes on the world economy and historical events. Modern Principles of Economics doesn’t use boxes, but we illustrate the principles of economics with historical events and, of course, we use tech companies such as Facebook and Apple to discuss network effects and coordination games. Samuelson is a bit harsh on Mankiw, however, because it’s very easy to overwhelm students with details. Like physics, economics is powerful because it explains many things with a handful of principles. It’s true that Mankiw’s book doesn’t have much history or color–his paradigmatic market is the market for ice cream–but abstraction can focus attention. The tradeoff, of course, is that it can also lead to vanilla economics. But the Mankiw text is clearly written, and the micro text is especially well organized, one reason we chose a similar organization for Modern Principles.

In Modern Principles we illustrate the ideas with more interesting markets, but we work with them repeatedly so students don’t become overwhelmed. Our paradigmatic market is the market for oil. We use it to teach supply and demand, cartels, and the importance of real macroeconomic shocks. Using the market for oil also lets us teach about some important events in world history, such as the OPEC oil crisis and the industrialization of China.

Samuelson is correct that the financial crisis was a run on the shadow banks, but he’s incorrect that this isn’t taught to students of Econ 101. Here’s Tyler on the financial crisis. He covers leverage, securitization, asymmetric information, bank runs, fire sales and the rise of the shadow banking system. Students with the right textbook are well informed about the financial crisis and the economic principles that can help us to understand, analyze and perhaps avoid future financial crises.

The Economist covers Why Are the Prices So D*mn High?

The Economist does a very nice job covering Why Are the Prices So D*mn High.

Baumol’s earliest work on the subject, written with William Bowen, was published in 1965. Analyses like that of Messrs Helland and Tabarrok nonetheless feel novel, because the implications of cost disease remain so underappreciated in policy circles. For instance, the steadily rising expense of education and health care is almost universally deplored as an economic scourge, despite being caused by something indubitably good: rapid, if unevenly spread, productivity growth. Higher prices, if driven by cost disease, need not mean reduced affordability, since they reflect greater productive capacity elsewhere in the economy. The authors use an analogy: as a person’s salary increases, the cost of doing things other than work—like gardening, for example—rises, since each hour off the job means more forgone income. But that does not mean that time spent gardening has become less affordable.

It’s an implication of the Baumol effect that everyone ends up working in a low-productivity industry!

The only true solution to cost disease is an economy-wide productivity slowdown—and one may be in the offing. Technological progress pushes employment into the sectors most resistant to productivity growth. Eventually, nearly everyone may have jobs that are valued for their inefficiency: as concert musicians, or artisanal cheesemakers, or members of the household staff of the very rich. If there is no high-productivity sector to lure such workers away, then the problem does not arise.

Misunderstanding the Baumol effect can lead to a cure worse than the “disease”:

These possibilities reveal the real threat from Baumol’s disease: not that work will flow toward less-productive industries, which is inevitable, but that gains from rising productivity are unevenly shared. When firms in highly productive industries crave highly credentialed workers, it is the pay of similar workers elsewhere in the economy—of doctors, say—that rises in response. That worsens inequality, as low-income workers must still pay higher prices for essential services like health care. Even so, the productivity growth that drives cost disease could make everyone better off. But governments often do too little to tax the winners and compensate the losers. And politicians who do not understand the Baumol effect sometimes cap spending on education and health. Unsurprisingly, since they misunderstand the diagnosis, the treatment they prescribe makes the ailment worse.

My only complaint is that the excellent reviewer has not followed our lead and called it the Baumol effect–cost disease is a misleading name!
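
For readers who want to see the mechanics, here is a stylized two-sector sketch in Python; all numbers are illustrative, not from the book. Manufacturing productivity grows 2 percent a year, competition for workers makes wages track that growth economy-wide, and a haircut always takes one hour of labor:

    # Stylized Baumol effect: wages rise with manufacturing productivity, so
    # the price of a stagnant service rises even though nothing about
    # producing the service has changed. Illustrative numbers only.
    for year in (0, 25, 50, 75):
        productivity = 1.02 ** year           # goods per hour of labor
        wage = productivity                   # wages track productivity
        goods_price = wage / productivity     # unit labor cost of a good: flat
        haircut_price = wage * 1.0            # one hour of labor per haircut
        print(f"year {year:2d}: goods {goods_price:.2f}, haircut {haircut_price:.2f}")

The haircut roughly quadruples in relative price by year 75, yet measured in labor it is no less affordable: an hour of work still buys exactly one haircut. That is the sense in which rising relative prices for services reflect good news elsewhere in the economy.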

Addendum: Other posts in this series.

Glenn Loury Speaks

On a Thursday evening in April, Glenn Loury is talking about race, ethics, and affirmative action. And he’s getting emotional. “Don’t patronize my people,” he told an audience at the College of the Holy Cross, in Massachusetts. “Don’t judge us by a different standard. Don’t lower the bar! Why are you lowering the bar? What’s going on there? Is that about guilt or pity?” He let the question hang in silence for a moment. “Tell me a pathway to equality that is rooted in either one of those things.”

That’s the opening to a sharp and very candid interview of Glenn Loury by Evan Goldstein in the Chronicle of Higher Education. Loury continues on affirmative action:

Equality is the only legitimate long-term goal — racial equality, not head-counting. I’m talking about equality of dignity, respect, standing, accomplishment, achievement, honor. People have to earn these things. What do I want to do? I want to reorient the discussion around the development of African-American capacities to compete.

On his personal life:

Q: By the late 1990s, you’d broken with many former friends on the right. You’d undergone a political conversion. Where did you land?

A: There’s an arc to this thing, and it’s odd. I describe myself today as right of center.

What happened is that I went through a trauma. I was accused of assaulting a woman with whom I was having an extramarital affair. I was publicly humiliated. I had to withdraw an appointment as undersecretary of education in the last years of Reagan’s second term. I was a crack-cocaine addict; it almost killed me. My wife at the time, God bless her, stayed with me, and we subsequently had two fine sons. But at the time, I was dying.

I found Jesus. I got my life together.

Read the whole thing.

How to Become a Federal Criminal

Mike Chase, author of the excellent twitter feed @CrimeADay, has now written the illustrated handbook, How to Become a Federal Criminal. In truth, a handbook wasn’t necessary because it is very easy to become a federal criminal.

You may know that you are required to report if you are traveling to or from the United States with $10,000 or more in cash. Don’t hop over the Canadian border to buy a used car, for example, or the Feds may confiscate your cash (millions of dollars are confiscated every year). Did you also know that you can’t leave the United States with more than $5 in nickels??? That’s a federal crime punishable by up to five years in prison. How about carrying a metal detector in a national park–up to six months in prison. And God forbid you should use your metal detector and find something more than 100 years old; that can put you away for up to a year. Also illegal in a national park? Making unreasonable gestures to a passing horse.

The expansion of federal criminal law into every nook and cranny of life can be amusing, but there is a darker side.

The feds also have unbelievably powerful tools at their disposal. They can subpoena your bank records, listen to your phone calls, indict you in a secret proceeding called a grand jury, and, if they think you lied to them, they can charge you for that alone. Then, if they can get a jury to find you guilty on just one charge, the judge is allowed to sentence you up to the statutory maximum based on things you were never charged with, or even things a jury acquitted you of, so long as the judge decides you probably did them. (italics added).

Moreover, when anyone can be charged with a crime, the application of criminal law becomes discretionary, and that discretion may be used to suppress the free exercise of other rights. Indeed, the recent Supreme Court case Nieves v. Bartlett makes it easier for the police to arrest people even if the reason for the arrest is retaliation for lawful behavior.

Slate: The First Amendment makes it unconstitutional for government officials to retaliate against you because they dislike your speech. At the same time, federal law gives you the right to sue state officials for compensation if they violate constitutional rights such as your right to free speech. But on Tuesday, the Supreme Court invented a rule that will often allow police officers to arrest people in retaliation for disfavored speech without liability.

….Because local laws are full of minor infractions, like “loitering,” that are frequently violated without incident, police will often have a pretext to arrest people engaged in speech the officers don’t like. By immunizing such abuse, Nieves may have devastating effects on demonstrators, press photographers, and anyone who wants to exercise their speech rights in public, like the right to film the police or verbally challenge officer misconduct. The power to arrest is a potent tool for suppressing speech because even if charges are later dropped, arrestees must undergo the ordeal—and dangers—of being booked and jailed, and they may have to disclose the arrest on future job and housing applications, among other ramifications.

SlateStarCodex and Caplan on ‘Why Are the Prices So D*mn High?’

SlateStarCodex, whose 2017 post on the cost disease was one of the motivations for our investigation, says Why Are the Prices So D*mn High (now available in print, ePub, and PDF) is “the best thing I’ve heard all year. It restores my faith in humanity.” I wouldn’t go that far.

SSC does have some lingering doubts and points to certain areas where the data isn’t clear and where we could have been clearer. I think this is inevitable. A lot has happened in the post-World War II era. In dealing with very long-run trends, so much else is going on that answers will never be conclusive. It’s hard to see the signal in the noise. I think of the Baumol effect as something analogous to global warming. The tides come and go, but the sea level is slowly rising.

In contrast, my friend Bryan Caplan is not happy. Bryan’s basic point is to argue, ‘look around at all the stupid ways in which the government prevents health care and education prices from falling. Of course, government is the explanation for higher prices.’ In point of fact, I agree with many of Bryan’s points. Bryan says, for example, that immigration would lower health care prices. Indeed it would. (Aside: it does seem odd for Bryan to argue that, if K-12 education were privately funded, schools would not continue their insane practice of requiring primary school teachers to have B.A.s when, in fact, as Bryan knows, credentialism has occurred throughout the economy.)

The problem with Bryan’s critiques is that they miss what we are trying to explain: why some prices have risen while others have fallen. Immigration would indeed lower health care prices, but it would also lower the price of automobiles, leaving the relative difference unexplained. Bryan, the armchair economist, has a simple syllogism: regulation increases prices; education is regulated; therefore regulation explains higher education prices. The problem is that most industries are regulated. Think about the regulations that govern the manufacture of automobiles. Why do all modern automobiles look the same? As Car and Driver puts it:

In our hyperregulated modern world, the government dictates nearly every aspect of car design, from the size and color of the exterior lighting elements to how sharp the creases stamped into sheet metal can be.

(See Jeffrey Tucker for more.) And that’s just design regulation. There are also environmental regulations (e.g., ethanol, catalytic converters, CAFE), engine regulations, made-in-America regulations, not to mention all the regulations on inputs like steel and coal. The government even regulates how cars can be sold, preventing Tesla from selling direct to the public! When you put all these regulations together, it’s not at all obvious that there is more regulation in education than in auto manufacturing. Indeed, since the major increase in regulation since the 1970s has been in environmental regulation, which impacts manufacturing more than services, it seems plausible that regulation has increased more for auto manufacturing.

As an empirical economist, I am interested in testable hypotheses. A testable hypothesis is that the industries with the biggest increases in regulation have seen the biggest increases in prices over time. Yet, when we test that hypothesis as best we can, it appears to be false. Remember, this does not mean that regulation doesn’t increase prices! It can and probably does; it’s just that regulation is not the explanation for the differences in prices we see across industries. (Note also that Bryan argues that you don’t need increasing regulation to explain increasing prices, which is true, but I still need a testable hypothesis, not an unfalsifiable claim.)
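
The mechanics of such a test are just a cross-industry comparison of regulation growth and price growth. Here is a minimal sketch in Python, where the numbers are invented placeholders standing in for real measures such as RegData-style restriction counts and industry price indices:

    # Toy version of the test: does growth in regulation predict growth in
    # prices across industries? All numbers are invented for illustration.
    from statistics import correlation  # Python 3.10+

    reg_growth   = [3.1, 1.2, 1.8, 2.5, 0.9]     # %/yr growth in restrictions
    price_growth = [-0.5, -1.0, 2.8, 3.0, -2.0]  # %/yr real price change

    print(f"cross-industry correlation: {correlation(reg_growth, price_growth):.2f}")
    # The claim in the text is that, with real data, this correlation is weak.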

So by all means let’s deregulate, but don’t expect 70+ year price trends to reverse until robots and AI start improving productivity in services faster than in manufacturing.

Let me close with this. What I found most convincing about the Baumol effect is consilience. Here, for example, are two figures that did not make the book. The first shows car prices versus car-repair prices. The second shows shoe and clothing prices versus shoe repair, tailors, dry cleaners and hair styling. In both cases, the goods price is way down and the service price is up. The Baumol effect offers a unifying account of trends such as these across many different industries. Other theories tend to be ad hoc, false, or unfalsifiable.

[Figures omitted.]
Addendum: Other posts in this series.