Results for “licensing”

The new issue of Econ Journal Watch

Too little, too late on the excess burdens of taxation: Cecil Bohanon, John Horowitz, and James McClure show that public finance textbooks do a very poor job of illuminating the excess burdens of taxation and incorporating such burdens into the analysis of the costs of government spending.

Does occupational licensing deserve our seal of approval? Uwe Reinhardt reviews Morris Kleiner’s work on occupational regulation.

Clashing Northmen: In a previous issue, Arild Sæther and Ib Eriksen interpreted the postwar economic performance of Norway and the role of economists there. Here Olav Bjerkholt strongly objects to their interpretation, and Sæther and Eriksen reply.

Pull over for inspection: Dragan Ilić explores replicability and interpretation problems of a recent American Economic Review article by Shamena Anwar and Hanming Fang on racial prejudice and motor vehicle searches.

Capitalism and the Rule of Love: We reproduce a profound and rich—yet utterly neglected—essay by Clarence Philbrook, first published in 1953.

EJW Audio

The link to the issue is here.

America’s economic problems predated the Great Recession

Here is a very good piece by Binyamin Appelbaum focusing on the research of Davis and Haltiwanger; here is one excerpt:

Employment losses during the Great Recession may have had more to do with factors like the rise of Walmart than with the recession itself, two economists say in a new academic paper.

The paper, presented Friday morning at the annual gathering of economists and central bankers at Jackson Hole, Wyo., argues that the share of Americans with jobs has declined because the labor market has stagnated in recent decades — fewer people losing or leaving jobs, fewer people landing new ones. This dearth of creative destruction, the authors argue, is the result of long-term trends including a slowdown in small business creation and the rise of occupational licensing.

“These results,” wrote the economists Stephen J. Davis, of the University of Chicago, and John Haltiwanger, of the University of Maryland, “suggest the U.S. economy faced serious impediments to high employment rates well before the Great Recession, and that sustained high employment is unlikely to return without restoring labor market fluidity.”

Their findings contribute to the growing genre of papers that purport to show that the weakness of the American economy is caused largely by problems that predate the recession — and that the Federal Reserve can’t remedy them with low interest rates.
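
The “fluidity” at issue is measured with gross job flows. As a rough illustration (the employment and flow figures below are hypothetical, not Davis and Haltiwanger’s), the job reallocation rate sums the job creation and job destruction rates; the paper’s claim is that this kind of churn has been trending down for decades:

```python
# A rough sketch of "labor market fluidity" using gross job flows.
# One standard summary is the job reallocation rate: the job creation rate
# plus the job destruction rate. All figures below are made up for illustration.
employment = 130_000_000      # hypothetical total employment
jobs_created = 7_000_000      # hypothetical gross job creation over the period
jobs_destroyed = 6_500_000    # hypothetical gross job destruction over the period

creation_rate = jobs_created / employment
destruction_rate = jobs_destroyed / employment
reallocation_rate = creation_rate + destruction_rate  # "fluidity" falls as this falls

print(f"job reallocation rate: {reallocation_rate:.1%}")  # ~10.4% in this example
```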

Read the whole thing.

The Paul Ryan anti-poverty plan

The Ryan plan is here (pdf), an NYT summary is here. Overall it’s pretty good.  It attacks excess incarceration and occupational licensing and regressive regulations, three issues where a serious dialogue is badly needed.  It makes a good attempt to limit the incentives for lower-income people not to work.  It’s better than what the Left is turning out for the first time in…how long?

I’m not crazy about the complicated plan to monitor the lives of the poor in more detail (“…work with families to design a customized life plan to provide a structured roadmap out of poverty.”)  And my biggest conceptual objection is the heavy stress on block grants and letting the states figure things out.  I’m not opposed to that in principle, and I might even favor it, but I think it’s often the lazy man’s way of avoiding talk about difficult trade-offs.  I’d like to see a possible plan for just a single state, or better yet two or three, that is supposed to represent an improvement.  That shouldn’t be too hard to do, or if it is maybe the states can’t do it either.  It’s not as if fifty states are giving us a market-based discovery process, as the rhetoric sometimes implies.  Furthermore we have a bunch of large states with ongoing bad governance, such as CA, NY, and IL, and maybe the federal government really can do better for those places.

Here is Vox on the regulation side of the plan.  Kevin Drum offers comment.  Ross Douthat mostly likes it.  Jared Bernstein doesn’t like it.  Robert Greenstein is critical.  Here is Neil Irwin.  And Annie Lowrey.  And Josh Barro.  And Yuval Levin.  And Ezra Klein.  Other people have opinions about it, too.  Or so I am led to believe.

Did vouchers cause the decline in Swedish schools?

It seems not.  Responding to an earlier piece by Ray Fisman, Tino Sanandaji writes:

Fisman doesn’t cite any plausible mechanism through which private schools could have dragged down the test scores of the 86 percent of Swedish pupils who attend public schools. The explanation cannot be that private schools have drained public schools of resources, as private schools on average get 4 percent less funding than public schools.

One-third of Sweden’s municipalities still have no private schools. Social Democratic strongholds in northern Sweden in particular were less enthusiastic about licensing such institutions, and if private schools were causing the Swedish school crisis, we would expect municipalities with no privatization to outperform the rest of the country. Two studies by Böhlmark and Lindahl suggest that school results, if anything, fell more in regions with no private schools.

He also argues:

…in my view, the main culprit was the experiment with radically new pedagogical methods. The Swedish school system used to rely on traditional teaching methods. In recent decades, modern “individualist” or “progressive” pedagogic ideas took hold. The idea is that pupils should not be forced to learn using external incentives such as grades, and children should take responsibility for their own learning, driven by internal motivation. Rote memorization and repetition are viewed as old-fashioned relics. Teacher-led lectures have increasingly been replaced by group work and “research projects.”

And this:

The private Swedish schools are not really allowed to innovate where it matters, with their pedagogic methods. The curriculum and rules in the classroom are determined by the state, which also trains teachers in the so called “modern” pedagogic theories. “Swedish schools have comparatively low levels of autonomy over curricula and assessments,” PISA notes.

In practice, what private Swedish schools have control over is management and cost control, and this is where they have directed their efforts. But since the public Swedish schools were pretty well managed to start with, productivity gains from privatization were limited.

For the pointer I thank Daniel B. Klein and Niclas Berggren.

Just a bit more D.C. freedom has arrived on our shores…

Getting paid to tell tourists about D.C. history will no longer involve passing a 100-question test or paying a fee. Anyone can just show up and talk without fear of arrest.

On Friday, the U.S. Court of Appeals for the D.C. Circuit threw out a 108-year-old city code requiring every “sightseeing tour guide” in the city to be licensed after correctly answering 70 out of 100 multiple-choice questions.

The decision is a victory for Tonia Edwards and Bill Main, the married couple who together operate the (illegal, until Friday) Segway tour company Segs in the City and have battled the regulations since 2010.

The $200 cost of the licensing process was too high for many of their guides, they said, often recent college graduates who work in the business for only a few months.

There is more here.  “Small steps toward a much better world,” as they say…

Assorted links

1. Do imaginary companions die?

2. Albert Hirschman’s Hiding Hand.

3. Should adults read Young Adult fiction?

4. Edward Conard on whether rescuing subprime borrowers would have fixed the economy.

5. Why hasn’t the price of oil gone up more?  And should you in fact fear Friday the 13th?

6. Do dogs prefer to avoid being part of the 47%? (speculative, to say the least)

7. How about state licensing for those who write code?

8. Krugman post on wrongness.

How the job of policeman is changing

Here is one bit from an interesting article:

Minnesota has long led the nation in peace officer standards; it’s the only state to require a two-year degree and licensing. Now, a four-year degree is becoming a more common standard for entry into departments including Columbia Heights, Edina and Burnsville. Many other departments require a four-year degree for promotion.

It’s not a rapid-fire change, but rather an evolution sped up by high unemployment that deepened the candidate pool and gave chiefs more choices. Officer pay and benefits can attract four-year candidates. Edina pays top-level officers $80,000; Columbia Heights pays nearly $75,000.

Nowadays, officers are expected to juggle a variety of tasks and that takes more education, chiefs said. Officers communicate with the public, solve problems, navigate different cultures, use computers, radios and other technology while on the move, and make split-second decisions about use of force with a variety of high-tech tools on their belt. And many of those decisions are recorded by squad car dashboard cameras, officer body cameras and even bystanders with smartphones.

There is more here, and for the pointer I thank Kirk Jeffrey.

A New FDA for the Age of Personalized, Molecular Medicine

In a brilliant new paper (pdf) (html) Peter Huber draws upon molecular biology, network analysis and Bayesian statistics to make some very important recommendations about FDA policy. Consider the following drugs (my list):

Drug A helps half of those to whom it is prescribed but it causes very serious liver damage in the other half. Drug B works well at some times but when administered at other times it accelerates the disease. Drug C fails to show any effect when tested against a placebo but it does seem to work in practice when administered as part of a treatment regime.

Which of these drugs should be approved and which rejected? The answer is that all of them should be approved; that is, all of them should be approved if we can target each drug to the right patient at the right time and with the right combination of other drugs. Huber argues that Bayesian adaptive testing, with molecular biology and network analysis providing priors, can determine which patients should get which drugs when and in what combinations. But we can only develop the data to target drugs if the drugs are actually approved and available in the field. The current FDA testing regime, however, is not built for adaptive testing in the field.
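
To make the idea of Bayesian adaptive testing concrete, here is a minimal sketch, assuming a simple Thompson-sampling allocator with hypothetical subtypes, drugs, and response rates. It illustrates the general technique, not Huber’s actual machinery, which would also supply informative priors from molecular biology and network analysis:

```python
import random

# Minimal sketch of Bayesian adaptive assignment (Thompson sampling).
# Each (subtype, drug) pair keeps a Beta prior over its response rate; each new
# patient gets the drug that currently looks best for his or her subtype.
# The subtypes, drugs, and true response rates below are hypothetical.

SUBTYPES = ["subtype_1", "subtype_2"]
DRUGS = ["drug_A", "drug_B"]

# Beta(1, 1) uninformative priors; in Huber's framework these would instead be
# informed by molecular biology and network analysis.
beliefs = {(s, d): [1, 1] for s in SUBTYPES for d in DRUGS}

# Hypothetical true response rates, unknown to the allocator.
true_rates = {("subtype_1", "drug_A"): 0.7, ("subtype_1", "drug_B"): 0.2,
              ("subtype_2", "drug_A"): 0.2, ("subtype_2", "drug_B"): 0.7}

for patient in range(2000):
    s = random.choice(SUBTYPES)
    # Thompson sampling: draw a plausible response rate per drug, pick the best.
    d = max(DRUGS, key=lambda drug: random.betavariate(*beliefs[(s, drug)]))
    responded = random.random() < true_rates[(s, d)]
    beliefs[(s, d)][0 if responded else 1] += 1  # update successes / failures

for (s, d), (a, b) in sorted(beliefs.items()):
    print(s, d, "posterior mean response rate:", round(a / (a + b), 2))
```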

The current regime was built during a time of pervasive ignorance when the best we could do was throw a drug and a placebo against a randomized population and then count noses. Randomized controlled trials are critical, of course, but in a world of limited resources they fail when confronted by the curse of dimensionality. Patients are heterogeneous and so are diseases. Each patient is a unique, dynamic system, and at the molecular level diseases are heterogeneous even when symptoms are not. In just the last few years we have subdivided breast cancer into first four and now ten different types, and the subdivision is likely to continue as knowledge expands. Match heterogeneous patients against heterogeneous diseases and the result is a high-dimensional system that cannot be well navigated with expensive randomized controlled trials. As a result, the FDA ends up throwing out many drugs that could do good:

Given what we now know about the biochemical complexity and diversity of the environments in which drugs operate, the unresolved question at the end of many failed clinical trials is whether it was the drug that failed or the FDA-approved script. It’s all too easy for a bad script to make a good drug look awful. The disease, as clinically defined, is, in fact, a cluster of many distinct diseases: a coalition of nine biochemical minorities, each with a slightly different form of the disease, vetoes the drug that would help the tenth. Or a biochemical majority vetoes the drug that would help a minority. Or the good drug or cocktail fails because the disease’s biochemistry changes quickly but at different rates in different patients, and to remain effective, treatments have to be changed in tandem; but the clinical trial is set to continue for some fixed period that doesn’t align with the dynamics of the disease in enough patients.

Or side effects in a biochemical minority veto a drug or cocktail that works well for the majority. Some cocktail cures that we need may well be composed of drugs that can’t deliver any useful clinical effects until combined in complex ways. Getting that kind of medicine through today’s FDA would be, for all practical purposes, impossible.
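
To see how quickly the curse of dimensionality bites, consider a back-of-the-envelope calculation; the subtype, strata, and sample-size numbers below are mine and purely illustrative:

```python
# Illustrative arithmetic only: how randomized-trial requirements explode
# once patients and diseases are both heterogeneous.
disease_subtypes = 10    # e.g., breast cancer now split into roughly ten types (see above)
patient_strata = 5       # hypothetical biomarker strata that modify drug response
regimens = 3             # hypothetical dosing/combination regimens to compare

arms = disease_subtypes * patient_strata * regimens
patients_per_arm = 100   # rough figure for adequate statistical power per arm

print("trial arms:", arms)                          # 150
print("patients needed:", arms * patients_per_arm)  # 15,000 -- for a single drug
```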

The alternative to the FDA process is large collections of data on patient biomarkers, diseases and symptoms, all evaluated on the fly by Bayesian engines that improve over time as more data is gathered. The problem is that the FDA is still locked in an old mindset when it refuses to permit any drugs that are not “safe and effective,” despite the fact that these terms can only be defined for a large population by doing violence to heterogeneity. Safe and effective, moreover, makes sense only when physicians are assumed to be following simple, A-to-B, drug-to-disease prescribing rules and not when they are targeting treatments based on deep, contextual knowledge that is continually evolving.

In a world of molecular medicine and mass heterogeneity, the FDA’s role will change from applying a single yes-no rule that fits no one to certifying biochemical pathways:

By allowing broader use of the drug by unblinded doctors, accelerated approval based on molecular or modest—and perhaps only temporary—clinical benefits launches the process that allows more doctors to work out the rest of the biomarker science and spurs the development of additional drugs. The FDA’s focus shifts from licensing drugs, one by one, to regulating a process that develops the integrated drug-patient science to arrive at complex, often multidrug, prescription protocols that can beat biochemically complex diseases.

…As others take charge of judging when it is in a patient’s best interest to start tinkering with his own molecular chemistry, the FDA will be left with a narrower task—one much more firmly grounded in solid science. So far as efficacy is concerned, the FDA will verify the drug’s ability to perform a specific biochemical task in various precisely defined molecular environments. It will evaluate drugs not as cures but as potential tools to be picked off the shelf and used carefully but flexibly, down at the molecular level, where the surgeon’s scalpels and sutures can’t reach.

In an important section, Huber notes that some of the biggest successes of the drug system in recent years occurred precisely because the standard FDA system was implicitly bypassed by orphan drug approval, accelerated approval and off-label prescribing (see also The Anomaly of Off-Label Prescribing).

But for these three major licensing loopholes, millions of people alive today would have died in the 1990s. Almost all the early HIV- and AIDS-related drugs—thalidomide among them—were designated as orphans. Most were rushed through the FDA under the accelerated-approval rule. Many were widely prescribed off-label. Oncology is the other field in which the orphanage, accelerated approval, and off-label prescription have already played a large role. Between 1992 and 2010, the rule accelerated patient access to 35 cancer drugs used in 47 new treatments. For the 26 that had completed conventional followup trials by the end of that period, the median acceleration time was almost four years.

Together, HIV and some cancers have also gone on to demonstrate what must replace the binary, yes/ no licensing calls and the preposterously out-of-date Washington-approved label in the realm of complex molecular medicine.

Huber’s paper has a foreword by Andrew C. von Eschenbach, former commissioner of the FDA, who concludes:

For precision medicine to flourish, Congress must explicitly empower the agency to embrace new tools, delegate other authorities to the NIH and/or patient-led organizations, and create a legal framework that protects companies from lawsuits to encourage the intensive data mining that will be required to evaluate medicines effectively in the postmarket setting. Last but not least, Congress will also have to create a mechanism for holding the agency accountable for producing the desired outcomes.

Judge Trims Patent Thicket

In Launching the Innovation Renaissance I wrote:

In the software, semiconductor and biotech sectors, for example, a new product can require the use of hundreds or even thousands of previous patents, giving each patent owner veto-power over innovation. Most of the owners don’t want to actually stop innovation of course, they want to use their veto-power to grab a share of the profits. So in theory patent owners could agree to a system of licenses from which everyone would benefit. In practice, however, licensing is costly, time-consuming and less likely to work the more parties are involved. It’s easy for a bargain to break down when five owners each want 25 percent of the profits. It’s almost impossible for a bargain to work when hundreds of owners each want 10 percent of the profits.
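
The arithmetic behind that breakdown is worth making explicit; the following minimal sketch just restates the numbers from the passage above:

```python
# Toy illustration: cumulative profit shares demanded by patent holders.
def total_demanded(num_owners, share_each):
    """Fraction of total profits jointly demanded by all owners."""
    return num_owners * share_each

print(total_demanded(5, 0.25))    # 1.25 -> five owners jointly demand 125% of profits
print(total_demanded(200, 0.10))  # 20.0 -> two hundred owners demand 2,000%; no bargain can clear
```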

The just-decided Microsoft v. Motorola opinion illustrates the problem and what judges can do to help resolve it. The case concerned two standard-essential patents (SEPs), which must be licensed to other parties at reasonable and non-discriminatory (RAND) rates. Motorola claimed that a reasonable fee required Microsoft to pay over $4 billion annually; the court decided that a truly reasonable fee was about $1.8 million a year. Quite the discount. The decision by US District Judge James Robart is admirably clear:

When the standard becomes widely used, the holders of SEPs obtain substantial leverage to demand more than the value of their specific patented technology. This is so even if there were equally good alternatives to that technology available when the original standard was adopted. After the standard is widely implemented, switching to those alternatives is either no longer viable or would be very costly….The ability of a holder of an SEP to demand more than the value of its patented technology and to attempt to capture the value of the standard itself is referred to as patent “hold-up.”…Hold-up can threaten the diffusion of valuable standards and undermine the standard-setting process.

…In the context of standards having many SEPs and products that comply with multiple standards, the risk of the use of post-adoption leverage to exact excessive royalties is compounded by the number of potential licensors and can result in cumulative royalty payments that can undermine the standards…The payment of excessive royalty to many different holders of SEPs is referred to as “royalty stacking.”…a proper methodology for determining a RAND royalty should address the risk of royalty stacking by considering the aggregate royalties that would apply if other SEP holders made royalty demands of the implementer.
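
One way to read the court’s “aggregate royalties” point is as a cap on the total royalty burden that is then apportioned across SEP holders. The sketch below is my own toy operationalization of that logic, not Judge Robart’s actual methodology; the per-unit demands and the 5 percent cap are made up:

```python
# Hypothetical illustration of royalty stacking and a stacking-aware RAND cap.
demands = [0.50, 0.35, 0.80, 0.25, 0.60]  # made-up per-unit royalty demands from SEP holders ($)
product_price = 10.00                      # per-unit price of the standard-compliant product
aggregate_cap = 0.05 * product_price       # assume total royalties should not exceed 5% of price

stacked = sum(demands)
print("stacked demands:", stacked)         # 2.50 per unit, i.e., 25% of the product price

if stacked > aggregate_cap:
    # Scale each demand proportionally so the aggregate fits under the cap.
    rand_rates = [d * aggregate_cap / stacked for d in demands]
else:
    rand_rates = demands

print("stacking-aware rates:", [round(r, 3) for r in rand_rates])
print("new aggregate:", round(sum(rand_rates), 2))  # 0.5, i.e., 5% of price
```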

Judges made patent law what it is today and they are beginning to remake it. The decision impacts not just Microsoft and Motorola but the value of future patents and the value of future patent litigation.

New issue of Econ Journal Watch

You will find it here.  The contents include:

James Tooley on Abhijit Banerjee and Esther Duflo’s Poor Economics: Banerjee and Duflo propose to bypass the “big questions” of economic development and focus instead on “small steps” to improvement. But, says Tooley, they proceed to make big judgments about education in developing countries, judgments not supported by their own evidence.

Why the Denial? Pauline Dixon asks why writers at UNESCO, Oxfam, and elsewhere have denied or discounted the success and potentiality of private schooling in developing countries.

Neither necessary nor sufficient, but… Thomas Mayer critically appraises Stephen Ziliak and Deirdre McCloskey’s influential writings, particularly The Cult of Statistical Significance. McCloskey and Ziliak reply.

Was Occupational Licensing Good for Minorities? Daniel Klein, Benjamin Powell, and Evgeny Vorotnikov take issue with a JLE article by Marc Law and Mindy Marks. Law and Marks reply.

Mankiw vs. DeLong and Krugman on the CEA’s Real GDP Forecasts in Early 2009: David Cushman shows how a careful econometrician might have adjudicated the debate among these leading economists over the likelihood of a macroeconomic rebound.

Barriers to entry, cocoa bean counter edition

Becoming a cocoa beans grader is about four times harder than passing the New York State Bar exam, judging by a comparison of the two tests’ pass rates.

Candidates must correctly identify defects in beans such as mold, infestation from insects and cocoa that is “smoky or hammy”—a sign that the beans have been dried over a fire, not in the sun. Since beans easily absorb odors, the fire can give the beans a smoky flavor. In another section, they must identify the origin of various beans, most of which look identical to the layperson’s eye.

David Morales, one of the three men huddled over the beans in the ICE’s grading room, says he failed twice before he passed the exam two years ago. “I was studying for it, but not enough,” said the 37-year-old Bronx native, who noted the origin section tripped him up.

Is it a public sector monopoly, private sector monopoly, or some combination of both?  Or is it just really hard to do well, requiring individuals of truly specialized abilities and with a lot of value at stake?:

The problem is that there are only 24 certified graders for ICE and the exchange is concerned that retirement and old age will deprive it of a crucial cog in its commodity-trading machine.

In an effort to “keep the talent pure and fresh,” the exchange is offering its licensing exam in October for the first time in two years, said Valerie Colaizzo, managing director of commodities operations for the exchange.

The full article is here; hat tip goes to the excellent Daniel Lippman. Note, by the way, that Daniel’s most recent piece is here.

Optimal policy toward prostitution

An email from Samuel Lee:

…we take the liberty of sending you a link to the working paper:

In the paper, we build on Edlund and Korn’s model of voluntary prostitution (JPE, 2002) to study the effect of prostitution laws. Our main results are as follows:

– Neither across-the-board legalization nor criminalization unambiguously reduces sex trafficking. The impact of either of these policies crucially depends on the incidence of voluntary prostitution, which in turn depends on other variables, such as the female-male income ratio.

– Even if across-the-board criminalization reduces sex trafficking, it comes at the expense of voluntary prostitutes, who prefer legalization. There is then an inherent conflict between safeguarding (the liberties of) voluntary prostitutes and preventing trafficking. (This conflict is borne out in recent public controversies about prostitution laws in Canada, France, and South Korea.)

– Between different types of criminalization, criminalizing johns is preferable to criminalizing prostitutes. The former is more effective in combating sex trafficking than the latter. The latter is furthermore unjust towards trafficked prostitutes, who’d then be doubly victimized.

Based on our analysis, we propose a different legal approach, which has so far not been tried by any country: criminalization of johns outside of a “safe harbor” for voluntary prostitution. To be more specific:

– Licensed brothels and prostitutes (i.e., regulated brothels) combined with — and this is crucial — criminal penalties for johns who purchase sex outside of these brothels. This (i) discriminates between the two “modes of production” of commercial sex, voluntary sex work and trafficking, and (ii) funnels all demand to the desirable “mode.” Trivial as the idea may sound, this policy has the potential to achieve both objectives, safeguarding voluntary sex work and preventing trafficking, and hence to reconcile the two sides of the debate.

– There are implementation issues, but none that couldn’t be addressed. Point (i) requires effective background checks for the licensing procedures. It also requires monitoring of regulated brothels to ensure that only licensed prostitutes are employed there. Point (ii) requires undercover law enforcement (“fake illegal prostitutes”) to deter illegal purchases. And it requires severe penalties for illegal purchases — surveys suggest that the most effective one is public registries (shaming), which in this case is less a moral statement about prostitution than about trafficking, because buyers outside of the “safe harbor” would be aware that they very likely purchase sex from trafficking victims.

– There are clearly costs of licensing and enforcing the laws against illegal purchases. It seems to us, however, that these costs might be lower than for alternative, less effective approaches — which include the costs of enforcing across-the-board criminalization of prostitution and/or law enforcement activities targeted directly at traffickers, who are notoriously hard to catch or convict.

There are other questions regarding prostitution laws that we discuss in the paper; some of them call into question simple interpretations that are often made about empirical associations between domestic prostitution laws and the domestic incidence of trafficked prostitution. We hope that you find the paper interesting.