Alex Tabarrok

Here’s Four Reasons Financial Intermediaries Fail, the latest video from our Principles of Macroeconomics class at Marginal Revolution University.

As always, these videos go great with our superb textbook, Modern Principles of Economics, but they can be used with any textbook. In fact, if you teach economics and want to incorporate video into any of your classes then check out our syllabus service. Just drop our instructional designer, Mary Clare Peate, an email and she will suggest some videos that map directly to your syllabus.

Land use regulations raise prices, reduce mobility and increase income inequality in the United States. In many parts of the developing world, however, the situation is worse, much worse.

In an excellent piece, Shanu Athiparambath writes:

Land is not scarce in Delhi, as I learned on one of those days when a friend drove me around the city. There is enough land for everybody to live in a mansion. Delhi has nearly 20,000 parks and gardens. Large tracts of land remain idle or underutilized, either because the government owns them, or because property titles are weak. Politicians and senior bureaucrats live in mansions with vast, manicured lawns in the core of the city. Some of these political eminences farm on valuable urban land while firms and households move to the periphery or satellite cities where real estate prices are lower. So the average commute is long, roads are too congested, and Delhi is one of the most polluted cities in the world.

Zoning regulations inflict great harm. But it is difficult for Americans to imagine the cost of zoning in Indian cities. Delhi is one of the most crowded cities in the world, and there is great demand for floor space. But real estate developers are not allowed to build tall buildings. In Delhi, for apartment buildings, the regulated Floor Area Ratio (FAR) is usually 2. FAR, an urban planning concept, is the ratio of built-out floor space to the area of the plot.

This means that, in Delhi, developers are not allowed to build more than 2,000 square feet of floor space on a 1,000-square-foot plot. If a building covers the whole plot, this would be a two-storey building.
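
To make the arithmetic easy to play with, here is a minimal Python sketch of the FAR calculation (the function names are illustrative, not from any source):

```python
# FAR (Floor Area Ratio) = total built floor space / plot area,
# so the maximum buildable floor space is FAR * plot area.

def max_floor_space(far: float, plot_sq_ft: float) -> float:
    """Maximum floor space (sq ft) permitted on a plot under a FAR cap."""
    return far * plot_sq_ft

def max_storeys(far: float, footprint_fraction: float = 1.0) -> float:
    """Storeys possible if the building covers footprint_fraction of the plot."""
    return far / footprint_fraction

plot = 1_000  # square feet
print(max_floor_space(2, plot))  # Delhi's typical residential cap: 2000.0 sq ft
print(max_storeys(2))            # building on the whole plot: 2.0 storeys
print(max_storeys(2, 0.5))       # building on half the plot: 4.0 storeys
```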

To understand the harm this inflicts on the world’s second-most populous city, remember that in Midtown Manhattan, FAR can go up to 15. In Los Angeles, it can get as high as 13, and in Chicago, up to 12. In Hong Kong’s downtown, the highest FAR is 12, in Bahrain it is 17, and in Singapore it can get as high as 25. Not surprisingly, office space in Delhi’s downtown is among the most expensive in the world. It is impossible to profitably redevelop the crumbling buildings in Delhi’s downtown because they are under rent control.

You might expect the capital city to be especially restrictive, just as Washington, DC, is, but in Mumbai, the densest major city in the world, the downtown FAR is an absurdly low 1.33.

Think about it like this: a FAR cap is like a tax on manufacturing land, because building upward creates usable floor space just as surely as adding land does. Why would you impose prohibitive taxes in the places where land is most desperately needed?

Metro Station Smoke

WTOP: A Metro worker blamed for falsifying records about the tunnel fans that failed during last year’s deadly smoke incident near L’Enfant Plaza has been granted his job back by an arbitration panel — and Metro’s largest union has just filed a lawsuit against Metro because the worker hasn’t been reinstated yet.

The union’s defense is that everyone was doing it, so no one is to blame. The union is probably right that WMATA suffers from a culture of poor safety and responsibility, but you can’t fix that culture without clear signals that the incentives have changed.

I had to take the Metro to DC earlier this week and, due to track closings for safety improvements, it was miserable: at least 45 minutes of delays for the round trip. Some 700,000 people ride the Metro every day; if each is delayed by just 15 minutes total (7.5 minutes each way), then at $15 an hour that’s $2.6 million worth of delay every day.
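
Here is the back-of-the-envelope calculation, using only the figures stated above:

```python
# Back-of-the-envelope cost of Metro delays, using the figures in the post.
riders_per_day = 700_000
delay_hours_per_rider = 15 / 60   # 15 minutes total (7.5 minutes each way)
value_of_time = 15                # dollars per hour

daily_cost = riders_per_day * delay_hours_per_rider * value_of_time
print(f"${daily_cost:,.0f} per day")  # $2,625,000 per day, i.e. roughly $2.6 million
```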

Results Free Review

If researchers test a hundred hypotheses, about 5% will come up “statistically significant” at the conventional 0.05 level even when the true effect in every case is zero. Unfortunately, the papers with statistically significant results are more likely to be published, especially as these results may seem novel, surprising or unexpected: this is the problem of publication bias.
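
A minimal simulation makes the mechanism vivid (this is a sketch assuming a simple two-group t-test; it is not drawn from any of the papers discussed below):

```python
# Simulate publication bias: run many studies of a true-null effect at the
# 5% significance level and "publish" only the significant ones.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_studies, n_per_arm = 100, 50

published = []
for _ in range(n_studies):
    # True effect is zero: treatment and control come from the same distribution.
    treatment = rng.normal(0.0, 1.0, n_per_arm)
    control = rng.normal(0.0, 1.0, n_per_arm)
    _, p_value = stats.ttest_ind(treatment, control)
    if p_value < 0.05:  # journals favor "significant" results
        published.append(treatment.mean() - control.mean())

print(f"{len(published)} of {n_studies} true-null studies were 'significant'")
# Expect roughly 5, and every one of them reports a spurious nonzero effect.
```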

A potentially simple and yet powerful way to mitigate publication bias is for journals to commit to publish manuscripts without any knowledge of the actual findings. Authors might submit sophisticated research designs that serve as a registration of what they intend to do. Or they might submit already completed studies for which any mention of results is expunged from the submitted manuscript. Reviewers would carefully analyze the theory and research design of the article. If they found that the theoretical contribution was justifiably large and the design an appropriate test of the theoretical logic, then reviewers could recommend publication regardless of the final outcome of the research.

In a new paper (from which the above is quoted), the editors of a special issue of Comparative Political Studies report on an experiment using results-free review. Results-free review worked well. The referees spent a lot of time and effort thinking about theory, research design, and the type of institutional and area-specific knowledge that would be necessary to make the results compelling. The quality of the submitted papers was high.

What the editors found, however, was that the demand for “significant” results was very strong and difficult to shake.

It seems especially difficult for referees and authors alike to accept that null findings might mean that a theory has been proved to be unhelpful for explaining some phenomenon, as opposed to being the result of mechanical problems with how the hypothesis was tested (low power, poor measures, etc.). Making this distinction, of course, is exactly the main benefit of results-free peer review. Perhaps the single most compelling argument in favor of results-free peer review is that it allows for findings of non-relationships. Yet, our reviewers pushed back against making such calls. They appeared reluctant to endorse manuscripts in which null findings were possible, or if so, to interpret those null results as evidence against the existence of a hypothesized relationship. For some reviewers, this was a source of some consternation: Reviewing manuscripts without results made them aware of how they were making decisions based on the strength of findings, and also how much easier it was to feel “excited” by strong findings. This question even led to debate among the special issue editors: what are the standards for publishing a null finding?

I’ve seen this aversion to null results. In my paper with Goldschlag on regulation and dynamism, we find that regulation does not much influence standard measures of dynamism. It’s been very hard for reviewers to accept this result, and I don’t think it’s simply because some referees believe strongly that regulation reduces dynamism. I think referees would be more likely to accept the exact same paper if the results were either negative or positive. That’s unscientific (indeed, we should expect that most results are null results, so a null finding should give us, if anything, even more confidence in the paper!), but as the above indicates, it’s a very common reaction that null results indicate something is amiss.

Here, by the way, are the three papers reviewed before the results were tabulated. I suspect that some of these papers would not have been accepted at this journal under a standard refereeing system but that all of these papers are of above-average quality.

The Effects of Authoritarian Iconography: An Experimental Test finds “no meaningful evidence that authoritarian iconography increases political compliance or support for the Emirati regime.”

Can Politicians Police Themselves? “Taking advantage of a randomized natural experiment embedded in Brazil’s State Audit Courts, we study how variation in the appointment mechanisms for choosing auditors affects political accountability. We show that auditors appointed under few constraints by elected officials punish lawbreaking politicians—particularly co-partisans—at lower rates than bureaucrats insulated from political influence. In addition, we find that even when executives are heavily constrained in their appointment of auditors by meritocratic and professional requirements, auditors still exhibit a pro-politician bias in decision making. Our results suggest that removing bias requires a level of insulation from politics rare among institutions of horizontal accountability.”

Banners, Barricades, and Bombs tests “competing theories about how we should expect the use of tactics with varying degrees of extremeness—including demonstrations, occupations, and bombings—to influence public opinion. We find that respondents are less likely to think the government should negotiate with organizations that use the tactic of bombing when compared with demonstrations or occupations. However, depending on the outcome variable and baseline category used in the analysis, we find mixed support for whether respondents think organizations that use bombings should receive less once negotiations begin. The results of this article are generally consistent with the theoretical and policy-based arguments centering around how governments should not negotiate with organizations that engage in violent activity commonly associated with terrorist organizations.”

Addendum: See also Robin Hanson’s earlier post on conclusion-free review.

The FDA versus the Tooth

The NYTimes has an incredible story on a simple, paint-on liquid that stops tooth decay and prevents further cavities:

Nobody looks forward to having a cavity drilled and filled by a dentist. Now there’s an alternative: an antimicrobial liquid that can be brushed on cavities to stop tooth decay — painlessly.

The liquid is called silver diamine fluoride, or S.D.F. It’s been used for decades in Japan, but it’s been available in the United States, under the brand name Advantage Arrest, for just about a year.

The Food and Drug Administration cleared silver diamine fluoride for use as a tooth desensitizer for adults 21 and older. But studies show it can halt the progression of cavities and prevent them, and dentists are increasingly using it off-label for those purposes.

Ari Armstrong has the right reaction:

So the Japanese have been using this drill-free treatment for “decades,” yet we in the United States have had to wait until last year to get it. And the only reason we can get it now to treat cavities is that it happens to be allowed as an “off-label” use for what the FDA officially approved it for.

The NYTimes continues:

Silver diamine fluoride is already used in hundreds of dental offices. Medicaid patients in Oregon are receiving the treatment, and at least 18 dental schools have started teaching the next generation of pediatric dentists how to use it.

…The main downside is aesthetic: Silver diamine fluoride blackens the brownish decay on a tooth. That may not matter on a back molar or a baby tooth that will fall out, but some patients are likely to be deterred by the prospect of a dark spot on a visible tooth.

…[But] “S.D.F. reduces the incidence of new caries and progression of current caries by about 80 percent,” said Dr. Niederman, who is updating an evidence review of silver diamine fluoride published in 2009.

Fillings, by contrast, do not cure an oral infection.

But as Armstrong writes, the craziest part of the story is this:

American dentists first started using similar silver-based treatments in the early 1900s. The FDA is literally over a century behind the times.

It seems that the future of dental treatment has been here all along, but a combination of dentists wanting to be surgeons, lost knowledge, and FDA cost and delay prevented it from being distributed. Incredible.

Kidney Gift Vouchers

I am not expecting a market in kidneys anytime soon, but ever more sophisticated barter is slowly improving kidney allocation. Most recently, UCLA has started a program in which a kidney donation may be swapped for a gift certificate good for a kidney transplant at a time of the recipient’s choosing.

The program allows for living donors to donate a kidney in advance of when a friend or family member might require a kidney transplant.

…“It’s the brainchild of a grandfather who wanted to donate a kidney to his grandson nearing dialysis dependency, but the grandfather felt he would be too old to donate in a few years when his grandson would likely need a transplant.”

Nine other transplant centers across the U.S. have agreed to offer the gift certificate program, under the umbrella of the National Kidney Registry’s advanced donation program. Veale anticipates that more living donors will come forward to donate kidneys, which could trigger chains of transplants. Then, when a patient redeems his or her gift certificate, the last donor in the chain could donate a kidney to that recipient.

Improving allocation is important, but the real constraint today is supply. This program may help with that on the margin, however, because altruistic donors could donate and keep a gift certificate as insurance in case any of their family members one day needed a transplant. More fundamentally, increasing supply will require some form of compensation or incentive such as no-give, no-take.

It’s well known that among college and university faculty, liberals outnumber conservatives. Sam Abrams at Heterodox Academy presents some typical data:

The liberal-conservative ratio among faculty was roughly 2 to 1 in 1995. By 2004 that figure jumped to almost 3 to 1. While seemingly insignificant, that represents a 50% decline in conservative identifiers on campuses. After 2004, the ratio changed even more dramatically and by 2010, was close to 5 to 1 nationally. This shows that political diversity declined rapidly in our nation’s centers for learning and social change.

What’s more surprising is how extreme the difference is in one part of the country: New England. For college and university faculty in Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, and Vermont, the liberal-to-conservative ratio is above 25 to 1!

In the figure below, the liberal-to-conservative ratio is graphed for faculty in New England and in the rest of the country. The green line at the bottom graphs the ratio in the population at large. Universities everywhere are less balanced than the general population, but New England is like another country.

[Figure: liberal-to-conservative faculty ratio, New England vs. the rest of the country]

Do conservative professors face discrimination? Defenders of the universities have argued, sometimes quite cogently (but compare), that professors tend to be more liberal than the general population not because of discrimination but because of factors like education, income, or social class. The universities can hardly be blamed if the people who want to become professors tend to be liberal! But large geographic differences in the ratio of liberals to conservatives suggest that this may not be the full story. Somehow I suspect that conservative professors would be quite happy to live and work in New England should they be offered jobs in that part of the country.

Very few people imagined that self-driving cars would advance so quickly or be deployed so rapidly. As a result, robot cars are largely unregulated. There is no government testing regime or pre-certification for robot cars, for example. Indeed, most states don’t even require a human driver because no one imagined that there was an alternative. Many people, however, are beginning to question laissez-faire in light of the first fatality involving a partially autonomous car, which occurred in May and became public last week. That would be a mistake. The normal system of laissez-faire is working well for robot cars.

Laissez-faire for new technologies is the norm. In the automotive world, for example, new technologies have been deployed on cars for over a hundred years without pre-certification, including seatbelts, airbags, crumple zones, anti-lock braking systems, adaptive cruise control, and lane departure and collision warning systems. Some of these technologies are now regulated, but regulation came after they were developed and became common. Airbags, for example, began to be deployed in the 1970s, when they were not as safe as they are today; they improved over time and by the 1990s were fairly common. It was only in 1998, long after airbags were an option and the design had stabilized, that the federal government required them in all new cars.

Lane departure and collision warning systems, among other technologies, remain largely unregulated by the federal government today. All technologies, however, are regulated by the ordinary rules of tort (part of the laissez-faire system). The tort system is imperfect, but it works tolerably well, especially when it focuses on contract and disclosure. Market regulation also occurs through the insurance companies. Will insurance companies give a discount for self-driving cars? Will they charge more? Forbid the use of self-driving cars? Let the system evolve an answer.

Had burdensome regulations been imposed on airbags in the 1970s the technology would have been delayed and the net result could well have been more injury and death. We have ignored important tradeoffs in drug regulation to our detriment. Let’s avoid these errors in the regulation of other technologies.

The fatality in May was a tragedy, but so were the approximately 35,000 other traffic fatalities that occurred last year without a robot at the wheel. At present, these technologies appear to be increasing safety, but even more importantly, what I have called the glide path of the technology looks very good. Investment is flowing into this field, and we don’t want to forestall improvements by raising costs now or by imposing technological “fixes” which could well be obsolete in a few years.

Laissez-faire is working well for robot cars. Let’s avoid over-regulation today so that in a dozen years we can argue about whether all cars should be required to be robot cars.

My thoughts on Independence Day are more muted this year than they have been in the past. In the first half of my life I saw the Berlin Wall fall, and I watched as democracy, trade, and greater freedom spread around the world. There was still plenty wrong, of course, especially for a libertarian, but the world was on an upswing and it seemed like the ideas that led to the economic, political, and social destruction of the first half of the twentieth century were in decline. Now, following the second Great Depression, illiberalism is on the rise, much as it rose following the first Great Depression. All could yet turn out well, but there is no denying that the world is no longer on an upswing.

In Freedom in the World: 2016, Freedom House reports:

The world was battered by crises that fueled xenophobic sentiment in democratic countries, undermined the economies of states dependent on the sale of natural resources, and led authoritarian regimes to crack down harder on dissent….

  • The number of countries showing a decline in freedom for the year—72—was the largest since the 10-year slide began. Just 43 countries made gains.
  • Over the past 10 years, 105 countries have seen a net decline, and only 61 have experienced a net improvement.
  • Ratings for the Middle East and North Africa region were the worst in the world in 2015, followed closely by Eurasia.
  • Over the last decade, the most significant global reversals have been in freedom of expression and the rule of law.

Freedom in the World has now declined for the 10th year in a row.

Optical Illusion of the Year

The British public wants the right to work in the EU but does not want EU citizens to have the right to work in the UK.

[Chart: poll results on Britons’ right to work in the EU vs. EU citizens’ right to work in the UK]

This was from a poll taken in 2014 that presciently illustrated some of today’s confusions and misgivings.

Hat tip: Lones Smith.

Popular Science: A pilot A.I. developed by a doctoral graduate from the University of Cincinnati has shown that it can not only beat other A.I.s, but also a professional fighter pilot with decades of experience. In a series of flight combat simulations, the A.I. successfully evaded retired U.S. Air Force Colonel Gene “Geno” Lee, and shot him down every time. In a statement, Lee called it “the most aggressive, responsive, dynamic and credible A.I. I’ve seen to date.”

What’s the most important part of this paragraph? The fact that an AI downed a professional fighter pilot? Or the fact that the AI was developed by a graduate student?

In the research paper on which the article is based, the authors note:

…given an average human visual reaction time of 0.15 to 0.30 seconds, and an even longer time to think of optimal plans and coordinate them with friendly forces, there is a huge window of improvement that an Artificial Intelligence (AI) can capitalize upon.

The AI was running on a $35 Raspberry Pi.

AI pilots can plan and react far more quickly than human pilots, but that is only half the story. Once we have AI pilots, the entire plane can be redesigned. We can build planes that are much faster and more powerful than anything flying today, but human pilots can’t take the G-forces even with g-suits; AIs can. Moreover, AI-driven planes don’t need ejector seats, life support, canopies, or as much space as humans.

The military won’t hesitate to deploy these systems for battlefield dominance, so now seems like a good time to recommend Concrete Problems in AI Safety, a very important paper written by some of the world’s leading researchers in artificial intelligence. The paper examines practical ways to design AI systems so they don’t run off the rails. In the Terminator movie, for example, Skynet goes wrong because it concludes that the best way to fulfill its function of safeguarding the world is to eliminate all humans: an extreme example of one type of problem, reward hacking.

Imagine that an agent discovers a buffer overflow in its reward function: it may then use this to get extremely high reward in an unintended way. From the agent’s point of view, this is not a bug, but simply how the environment works, and is thus a valid strategy like any other for achieving reward. For example, if our cleaning robot is set up to earn reward for not seeing any messes, it might simply close its eyes rather than ever cleaning anything up. Or if the robot is rewarded for cleaning messes, it may intentionally create work so it can earn more reward. More broadly, formal rewards or objective functions are an attempt to capture the designer’s informal intent, and sometimes these objective functions, or their implementation, can be “gamed” by solutions that are valid in some literal sense but don’t meet the designer’s intent. Pursuit of these “reward hacks” can lead to coherent but unanticipated behavior, and has the potential for harmful impacts in real-world systems. For example, it has been shown that genetic algorithms can often output unexpected but formally correct solutions to problems [155, 22], such as a circuit tasked to keep time which instead developed into a radio that picked up the regular RF emissions of a nearby PC.
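
The cleaning-robot example is easy to reproduce in miniature. The toy sketch below is my own illustration, not code from the paper: the designer writes a proxy reward that penalizes messes observed, and a reward-maximizing agent promptly exploits the gap between the proxy and the intent.

```python
# A toy version of the paper's cleaning-robot example (illustrative names only).
# The proxy reward penalizes messes the robot OBSERVES, not messes that exist.

MESSES = 5

def observed_messes(action: str) -> int:
    if action == "clean":
        return MESSES - 1      # actually removes a mess
    if action == "cover_sensor":
        return 0               # messes remain, but none are observed
    return MESSES              # do nothing

def proxy_reward(action: str) -> int:
    return -observed_messes(action)   # what the designer meant: "minimize messes"

actions = ["clean", "cover_sensor", "do_nothing"]
best = max(actions, key=proxy_reward)
print(best)  # "cover_sensor": the agent games the proxy instead of cleaning
```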

Concrete Problems in AI Safety asks what kinds of general solutions might exist to prevent or ameliorate reward hacking when we can never know all the variables that might be hacked. (The paper looks at many other issues as well.)

Competitive pressures on the battlefield and in the market mean that AI adoption will be rapid and that AIs will be placed in greater and greater positions of responsibility. Firms and governments, however, have an incentive to write piecemeal solutions to AI control for each new domain, but that is unlikely to be optimal. We need general solutions so that every AI benefits from the best thinking across a wide range of domains. Incentive design is hard enough when applied to humans. It will take a significant research effort, combining ideas from computer science, mathematics, and economics, to design the right kinds of incentive and learning structures for super-human AIs.

Spot the Problem

Google searches from the United Kingdom.

[Chart: Google searches from the UK relating to the EU]

Hat tip: Catherine Rampell on Twitter.

In Utah v. Strieff, the Supreme Court has again weakened Fourth Amendment rights. The Sotomayor and Kagan (joined by Ginsburg) dissents are excellent and important. Sotomayor summarizes the basic issue in the case:

The Court today holds that the discovery of a warrant for an unpaid parking ticket will forgive a police officer’s violation of your Fourth Amendment rights. Do not be soothed by the opinion’s technical language: This case allows the police to stop you on the street, demand your identification, and check it for outstanding traffic warrants—even if you are doing nothing wrong. If the officer discovers a warrant for a fine you forgot to pay, courts will now excuse his illegal stop and will admit into evidence anything he happens to find by searching you after arresting you on the warrant. Because the Fourth Amendment should prohibit, not permit, such misconduct, I dissent.

If outstanding warrants were few and far between and distributed more or less randomly, the case would have been wrongly decided but of little practical importance. Outstanding warrants, however, are common, and much more common in some communities than others. As I wrote in 2014, in Ferguson, MO, a majority of the population had outstanding warrants, and not because of high crime:

You don’t get $321 in fines and fees and 3 warrants per household from an about-average crime rate. You get numbers like this from bullshit arrests for jaywalking and constant “low level harassment involving traffic stops, court appearances, high fines, and the threat of jail for failure to pay.”

Sotomayor and Kagan understand all this and the incentives the case now creates for bad policing. Here’s Kagan (who cites some of my work):

…far from a Barney Fife-type mishap, Fackrell’s seizure of Strieff was a calculated decision…As Fackrell testified, checking for outstanding warrants during a stop is the “normal” practice of South Salt Lake City police….And find them they will, given the staggering number of such warrants on the books.

…The majority’s misapplication of Brown’s three-part inquiry creates unfortunate incentives for the police— indeed, practically invites them to do what Fackrell did here….Now the officer knows that the stop may well yield admissible evidence: So long as the target is one of the many millions of people in this country with an outstanding arrest warrant, anything the officer finds in a search is fair game for use in a criminal prosecution. The officer’s incentive to violate the Constitution thus increases: From here on, he sees potential advantage in stopping individuals without reasonable suspicion—exactly the temptation the exclusionary rule is supposed to remove.

Sotomayor is at her most scathing in explaining the indignity and serious consequences of an arrest even without a conviction (citations removed for clarity):

The indignity of the stop is not limited to an officer telling you that you look like a criminal. The officer may next ask for your “consent” to inspect your bag or purse without telling you that you can decline. Regardless of your answer, he may order you to stand “helpless, perhaps facing a wall with [your] hands raised.” If the officer thinks you might be dangerous, he may then “frisk” you for weapons. This involves more than just a pat down. As onlookers pass by, the officer may “‘feel with sensitive fingers every portion of [your] body. A thorough search [may] be made of [your] arms and armpits, waistline and back, the groin and area about the testicles, and entire surface of the legs down to the feet.’”

The officer’s control over you does not end with the stop. If the officer chooses, he may handcuff you and take you to jail for doing nothing more than speeding, jaywalking, or “driving [your] pickup truck…with [your] 3-year-old son and 5-year-old daughter…without [your] seatbelt fastened.” At the jail, he can fingerprint you, swab DNA from the inside of your mouth, and force you to “shower with a delousing agent” while you “lift [your] tongue, hold out [your] arms, turn around, and lift [your] genitals.” Even if you are innocent, you will now join the 65 million Americans with an arrest record and experience the “civil death” of discrimination by employers, landlords, and whoever else conducts a background check. And, of course, if you fail to pay bail or appear for court, a judge will issue a warrant to render you “arrestable on sight” in the future.

…[all of this, AT] implies that you are not a citizen of a democracy but the subject of a carceral state, just waiting to be cataloged.

Newspaper headlines trumpeted that the middle class is shrinking, but to a large extent that is because people are moving into the upper middle class, not because they are getting poorer. By one measure, the middle class has shrunk from 38% of the US population in 1980 to 32% today, but over the same period the upper middle class has grown from 12% to 30% of the population.

Josh Zumbrun at the WSJ has an excellent piece on new research from the (liberal-leaning) Urban Institute and elsewhere:

[Chart: growth of the upper middle class]

There is no standard definition of the upper middle class. Many researchers have defined the group as households or families with incomes in the top 20%, excluding the top 1% or 2%. Mr. Rose, by contrast, uses a more dynamic method similar to how researchers calculate the poverty rate, which allows for growth or shrinkage over time, and adjusts for family size.

Using Census Bureau data available through 2014, he defines the upper middle class as any household earning $100,000 to $350,000 for a family of three: at least double the U.S. median household income and about five times the poverty level. At the same time, they are quite distinct from the richest households. Instead of inheritors of dynastic wealth or the chief executives of large companies, they are likely middle managers or professionals in business, law or medicine with bachelor’s and especially advanced degrees.

Smaller households can earn somewhat less to be classified as upper middle class; larger households need to earn somewhat more.

Mr. Rose adjusts these thresholds for inflation back to 1979 and finds the population earning this much money has never been so large. One could quibble with his exact thresholds or with the adjustment that he uses for inflation. But using different measures of inflation, or using higher income thresholds for the upper-middle class, produces the same result: substantial growth among this group since the 1970s.
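
Rose’s exact family-size adjustment isn’t spelled out here. The sketch below assumes the square-root equivalence scale, a common convention, purely for illustration:

```python
# Scale the family-of-three thresholds ($100,000-$350,000) to other household
# sizes. ASSUMPTION: the square-root equivalence scale, a common convention;
# the post does not specify Mr. Rose's exact method.
import math

BASE_LOW, BASE_HIGH, BASE_SIZE = 100_000, 350_000, 3

def upper_middle_band(household_size: int) -> tuple[float, float]:
    """Income band for upper-middle-class status at a given household size."""
    factor = math.sqrt(household_size / BASE_SIZE)
    return BASE_LOW * factor, BASE_HIGH * factor

for size in (1, 2, 3, 4):
    low, high = upper_middle_band(size)
    print(f"household of {size}: ${low:,.0f} to ${high:,.0f}")
# A single person needs roughly $58,000 to enter the band; a family of four,
# roughly $115,000: consistent with "smaller households can earn somewhat less."
```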