
Prostitution Reduces Rape

by Alex Tabarrok on October 31, 2017 at 7:34 am in Economics, Law

A new paper in the American Economic Journal: Economic Policy by Bisschop, Kastoryano, and van der Klaauw looks at the opening and closing of prostitution zones (tippelzones) in 25 Dutch cities.

Our empirical results show that opening a tippelzone reduces sexual abuse and rape. These results are mainly driven by a 30–40 percent reduction in the first two years after opening the tippelzone. For tippelzones with a licensing system, we additionally find long-term decreases in sexual assaults and a 25 percent decrease in drug-related crime, which persists in the medium to long run.

Cunningham and Shah studied decriminalization of indoor prostitution in Rhode Island and found very similar results.

We exploit the fact that a Rhode Island District Court judge unexpectedly decriminalized indoor prostitution in 2003 to provide the first causal estimates of the impact of decriminalization on the composition of the sex market, rape offenses, and sexually transmitted infection outcomes. Not surprisingly, we find that decriminalization increased the size of the indoor market. However, we also find that decriminalization caused both forcible rape offenses and gonorrhea incidence to decline for the overall population. Our synthetic control model finds 824 fewer reported rape offenses (31 percent decrease) and 1,035 fewer cases of female gonorrhea (39 percent decrease) from 2004 to 2009.
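For readers unfamiliar with the synthetic control method mentioned in that abstract, here is a minimal Python sketch of the basic idea: choose weights on untreated control units so that their weighted average tracks the treated unit before the policy change, then read the post-period gap as the estimated effect. Everything below is invented for illustration; it is not Cunningham and Shah's code or data.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
years = np.arange(1995, 2010)
pre = years < 2003                       # pre-decriminalization period

# Fake offense rates for one treated unit and five control units
controls = 40 + rng.normal(0, 2, size=(5, len(years)))
treated = controls.mean(axis=0) + rng.normal(0, 1, len(years))
treated[~pre] -= 10                      # pretend the policy cut offenses by 10

def pre_period_gap(w):
    # squared distance between treated and synthetic unit before treatment
    return ((treated[pre] - w @ controls[:, pre]) ** 2).sum()

n = controls.shape[0]
res = minimize(pre_period_gap, np.full(n, 1 / n),
               bounds=[(0, 1)] * n,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1})

synthetic = res.x @ controls
effect = (treated[~pre] - synthetic[~pre]).mean()
print(f"Estimated post-2003 effect: {effect:.1f} offenses per year")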

In addition, a working paper by Riccardo Ciacci and María Micaela Sviatschi studies prostitution in New York and also finds that prostitution significantly reduces sex crimes such as rape:

We use a unique data set to study the effect of indoor prostitution establishments on sex crimes. We built a daily panel from January 1, 2004 to June 30, 2012 with the exact location of police stops for sex crimes and the day of opening and location of indoor prostitution establishments. We find that indoor prostitution decreases sex crime with no effect on other types of crime. We argue that the reduction is mostly driven by potential sex offenders that become customers of indoor prostitution establishments. We also rule out other mechanisms such as an increase in the number of police officers and a reduction of potential victims in areas where these businesses opened. In addition, results are robust to different data sources and measures of sex crimes apart from police stops.

It’s become common to think that rape is about power and not about sex. No doubt. But some of it is about sex. Quoting Ciacci and Sviatschi again:

We find evidence consistent with the fact that potential perpetrators substitute towards indoor prostitution establishments instead of engaging in sex crimes…. This mechanism is in line with a survey of men who had purchased sex from women in London. About 54% of these men stated that if prostitution did not exist then they would be more likely to rape women who were not prostitutes. This belief was clearly held by one man who even stated: “Sometimes you might rape someone: you can go to a prostitute instead” (Farley et al., 2009).

In short, a wide variety of evidence from different authors, times and places, and experiments shows clearly and credibly that prostitution reduces rape. This finding is of great importance in considering how prostitution should be rationally regulated.

The Great Moderation and Leverage

by Alex Tabarrok on October 30, 2017 at 7:23 am in Economics

In response to my earlier post, The Great Moderation Never Ended, the perceptive Kevin Drum noted that the moderation seems to have been asymmetric–the booms have moderated more than the busts. That’s correct, but it’s about more than lower growth during booms–expansions also last longer. It’s as if the booms have been smoothed over a longer period of time but not the busts.

Søren Hove Ravn points me to a paper of his with co-authors, Leverage and Deepening: Business Cycle Skewness, which documents this fact and also proposes a theory.

The authors argue that financial innovation made credit more easily accessible and easier credit led to more leverage. Leverage, however, has an asymmetric feature. When asset prices are up, everything is golden: wealth is high and credit is easy because lenders are happy to lend to the rich. When asset prices decline, however, the economy takes a double hit: wealth is low and credit is tight. The net result is that booms are smoothed but busts become, if anything, even more violent.
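A toy simulation, my own illustration rather than the model in the paper, makes the asymmetry concrete: when ordinary shocks are damped by easy credit but occasional credit crunches hit hard, measured volatility falls while skewness turns negative.

import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(42)
T = 100_000
shocks = rng.normal(0, 1, T)

# Low-leverage economy: growth simply tracks symmetric shocks
low_leverage = shocks

# High-leverage economy: ordinary shocks damped by easy credit,
# plus rare credit crunches (3% of periods) that hit growth hard
crunch = rng.random(T) < 0.03
high_leverage = 0.5 * shocks - 3.0 * crunch

for name, g in [("low leverage", low_leverage), ("high leverage", high_leverage)]:
    print(f"{name:>13}: volatility {g.std():.2f}, skewness {skew(g):+.2f}")

The damping factor and the crunch probability are arbitrary; the point is only that the combination delivers smoother booms and more violent busts at the same time.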

The theory is promising because it explains both the negative skewness and the great moderation. It’s also important because higher leverage, longer expansions and greater negative skew are new features of business cycles that appear across many developed economies as shown by Jorda, Schularick and Taylor in Macrofinancial History and the New Business Cycle Facts. In this paper Jorda et al. create new data series using over 150 years of data from 17 economies and conclude:

…leverage is associated with dampened business cycle volatility, but more spectacular crashes.

and more generally:

We find that rates of growth, volatility, skewness, and tail events all seem to depend on the ratio of private credit to income. Moreover, key correlations and international cross-correlations appear to also depend quite importantly on this leverage measure. Business cycle properties have changed with the financialization of economies, especially in the postwar upswing of the financial hockey stick. The manner in which macroeconomic aggregates correlate with each other has evolved as leverage has risen. Credit plays a critical role in understanding aggregate economic dynamics.

Rodents of Unusual Size

by Alex Tabarrok on October 29, 2017 at 7:31 am in Film

The first love of the very talented team at Tilapia Films that produces many of our videos at MRUniversity isn’t economics (I know, hard to believe) but making documentaries. Their latest, Rodents of Unusual Size, has its world premiere Nov. 15 in New York at DocNYC. It’s bound to be great; check it out! Tickets here.


Lights On, Lights Off

by Alex Tabarrok on October 29, 2017 at 7:21 am in Economics, Science

You can learn a lot from satellite pictures of the earth at night; the famous picture of North and South Korea, which Tyler and I feature in Modern Principles, is just one such example.

ESRI has an interesting picture-story illustrating the lights that have turned on and those that have turned off between 2012 and 2016. It’s remarkable how much of North India literally turns on in this short space of time. Lights have also turned off around the globe, not only in places like Syria but also in much of the United States and Northern Europe; in the latter two cases the dimming is the surprising result of more efficient lighting and campaigns to reduce light pollution. Check it out.

After decades of researching the impact that humans are having on animal and plant species around the world, Chris Thomas has a simple message: Cheer up. Yes, we’ve wiped out woolly mammoths and ground sloths, and are finishing off black rhinos and Siberian tigers, but the doom is not all gloom. Myriad species, thanks in large part to humans who inadvertently transport them around the world, have blossomed in new regions, mated with like species and formed new hybrids that have themselves gone forth and prospered. We’re talking mammals, birds, trees, insects, microbes—all your flora and fauna. “Virtually all countries and islands in the world have experienced substantial increases in the numbers of species that can be found in and on them,” writes Thomas in his new book, Inheritors of the Earth: How Nature Is Thriving in an Age of Extinction.

That is the introduction to a very interesting interview with Chris Thomas, a conservation biologist. Read the whole thing.

Hat tip: The Browser.

The Great Moderation Never Ended

by Alex Tabarrok on October 26, 2017 at 7:24 am in Economics

In 2002, Stock and Watson pointed out that the volatility in the growth rate of real GDP between 1984 and 2001 was much lower than it had been between 1960 and 1983, a phenomenon dubbed the great moderation. In 2004, Ben Bernanke credited better monetary policy for the great moderation–meaning, of course, better monetary policy under Volcker, Greenspan and (Governor and later Chair) Bernanke. Other people credited sectoral changes in the structure of the economy such as the shift away from volatile sectors like manufacturing to less volatile sectors like health care. Improved information technology that led to better inventory control and smoother adjustments is another explanation. Some even argued that the great moderation was due to financial innovation! Yet others said it was just dumb luck. The dumb luck view got a boost with the great recession in 2008. Subsequently, many people mocked the idea of the great moderation and those who had tried to take credit.

Yet it is now clear that the great recession interrupted but did not end the great moderation. Since the great recession ended, growth in real GDP has been much less volatile than in the 1950s to 1980s. Indeed, volatility has been lower even taking into account the great recession. In the graph, for example, I simply bound the peaks and valleys. More sophisticated measures show the same thing.
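A simple version of such a measure is the standard deviation of quarterly real GDP growth by subperiod, which anyone can compute from FRED. The sketch below assumes the pandas_datareader package; GDPC1 is FRED's real GDP series, and the subperiod cutoffs are just illustrative.

import pandas_datareader.data as web

gdp = web.DataReader("GDPC1", "fred", start="1947-01-01")
growth = gdp["GDPC1"].pct_change() * 100      # quarterly growth, percent

print("Std. dev. of quarterly real GDP growth")
for label, start, end in [("1950-1983", "1950", "1983"),
                          ("1984-2007", "1984", "2007"),
                          ("2010-2017", "2010", "2017")]:
    print(f"  {label}: {growth.loc[start:end].std():.2f}")

# A 10-year rolling window shows the moderation directly:
# growth.rolling(40).std().plot()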

Of the possible explanations, we can now rule out luck. The economy isn’t confronted with fewer shocks than in the past but rather we are adjusting to shocks more successfully. Overall, however, I’d say the causes of the great moderation are still up for grabs.

One problem with most of the theories is that they predict more moderation over time. Manufacturing has continued to decline as a share of GDP, for example. Information technology has gotten much better since 1984. Financial innovation has, if anything, increased in scope. If you squint maybe the great moderation has gotten a bit more moderate over time but that isn’t clear.

Better monetary policy does fit the data in the sense that it could have been a one-time learning (thank you Milton Friedman). But I find it difficult to believe that policy makers are so much wiser than in the past or even that monetary policy has that much influence over the real economy. We also have to grapple with the fact that many countries have experienced a great moderation. In my view this pushes towards an explanation on the real side of the economy, although that rests on a judgment call that the world is more similar in real factors like technology than it is in policies.

We can rescue some of the real theories with limits or endogenous offsets. Inventory control, for example, can only get so good and no better. I find this plausible but it would be more convincing if we could pinpoint the key innovation. Some financial innovations might reduce volatility but, à la Minsky, also cause people to take more risks (an endogenous offset). That too has some plausibility, but again it would be better if we could pinpoint which were the volatility-decreasing innovations and which the volatility-increasing innovations.

It’s striking that the great moderation never ended and we still have no solid explanation for why it happened.

Tyler and I are thrilled to announce the next great course at MRUniversity, Understanding Data. Understanding Data is taught by our well-known and accomplished colleague Thomas Stratmann and is our most ambitious course to date.

In addition to lectures, Understanding Data features a fantastic interactive data tool, a built-in version of DataSplash, which was designed by Thomas Stratmann and Lorens Helmchen specifically to run small regressions and to teach econometrics. Students can pause the video, run regressions, make predictions, compute correlations, look at summary statistics and more–all on the same page! Furthermore, as the video progresses students are asked to answer questions and they receive immediate feedback on their answers.

A lot of work went into producing this course. Not only from Thomas and Lorens but also from our superb team at MRU led by Roman Hardgrave. This is a quantum leap for MRU and what you are seeing is version 1. If you notice some bugs or things that can be improved do email Support@mruniversity.com. Thanks.

The first lecture featuring the interactive data tool is Interpreting the Regression Line.
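DataSplash itself runs in the browser alongside the videos, but the underlying exercise is easy to reproduce offline. Here is a minimal Python sketch, with made-up data, of what interpreting the regression line amounts to: fit the line, then read the slope as the predicted change in the outcome per unit change in the regressor.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
education = rng.uniform(8, 20, 200)                  # years of schooling
wage = 5 + 2.5 * education + rng.normal(0, 5, 200)   # hourly wage with noise

X = sm.add_constant(education)
fit = sm.OLS(wage, X).fit()
intercept, slope = fit.params
print(f"wage = {intercept:.1f} + {slope:.1f} * education")
# Interpretation: in this made-up sample, each extra year of schooling is
# associated with roughly a $2.50 higher predicted hourly wage.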

David Evans on the economics of attention:

In 2016, 437 billion hours, worth $7.1 trillion dollars, were exchanged in the attention market in the US based on conservative estimates reported above. Attention platforms paid for that time with content and then sold advertisers access to portions of that time. As a result, advertisers were able to deliver messages to consumers that those consumers would probably not have accepted in the absence of the barter of content for their time. Consumers often don’t like getting these messages. But by agreeing to receive them they make markets more competitive.

The economics of attention markets focuses on three features. First it focuses on time as the key dimension of competition since it is what is being bought and sold. Second, it focuses on content since it plays a central role in acquiring time, embedding advertising messages, and operating efficient attention platforms. And third it focuses on the scarcity of time and the implications of that for competition among attention platforms.

The $7.1 trillion estimate for the value of content seems too high. The high value comes from Evans assuming that the marginal wage is higher than the average wage, so the average wage he uses to calculate the value of time is, if anything, an underestimate. For most people, I think the marginal wage is lower than the average (many people don’t even have jobs), so the average is an overestimate. Brynjolfsson and Oh, however, using somewhat different methods, estimate the consumer surplus from television at 10% of GDP and from the internet at 6% of GDP, or combined about $3 trillion at current levels. Either way, the attention economy is very large and understudied relative to its importance.
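As a back-of-the-envelope check, Evans' figures imply a value of time of roughly $16 per hour. The snippet below reproduces that arithmetic and shows how the total shrinks under a lower marginal-wage assumption; the 50 percent haircut is purely illustrative, not anyone's estimate.

hours = 437e9            # hours of attention exchanged in 2016 (Evans)
evans_value = 7.1e12     # Evans' valuation of that time

implied_wage = evans_value / hours
print(f"Implied value of time: ${implied_wage:.2f}/hour")        # about $16.25

# If the marginal hour of attention were worth half the average wage:
print(f"Value at half that wage: ${implied_wage / 2 * hours / 1e12:.1f} trillion")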

Replication is critical for scientific progress and integrity but incentives for replication have been low. It’s good news, therefore, that a new journal will be devoted solely to replication research:

The International Journal for Re-Views in Empirical Economics (IREE) is the first journal dedicated to the publication of replication studies based on economic micro-data. Furthermore, IREE publishes synthesizing reviews, micro-data sets and descriptions thereof, and articles dealing with replication methods and the development of standards for replications.

As yet, authors of replication studies, data sets and descriptions had a hard time gaining recognition for their work by citable publications and incentives for conducting these important kinds of work were immensely reduced….IREE provides the platform to authors to be given credit for serious empirical research in economics.

The publication of replication studies often depends on their result….replications usually need to reject the original study to get published whereas a scientific impact is denied for replications confirming original findings. This induces a severe publication bias….Therefore, IREE publishes research independent of the result of the study. The selection of published articles is based on technical and formal criteria but not with regards to the qualitative and quantitative results.

Deaton, Wooldridge, and Easterlin are all involved.

Hat tip: David Roodman on twitter.

Addendum: Also check out the inaugural Empirical Legal Studies Replication Conference, which will publish papers, independent of result, in an edition of the International Review of Law and Economics.

The great Tim Urban of Wait but Why has a deep dive into Why Cryonics Makes Sense.

A key argument:

Here’s an interesting way to think about it: Imagine a patient arriving in an ambulance to Hospital A, a typical modern hospital. The patient’s heart stopped 15 minutes before the EMTs arrived and he is immediately pronounced dead at the hospital. What if, though, the doctors at Hospital A learned that Hospital B across the street had developed a radical new technology that could revive a patient anytime within 60 minutes after cardiac arrest with no long-term damage? What would the people at Hospital A do?

Of course, they would rush the patient across the street to Hospital B to save him. If Hospital B did save the patient, then by definition the patient wouldn’t actually have been dead in Hospital A, just pronounced dead because Hospital A viewed him as entirely and without exception doomed.

What cryonicists suggest is that in many cases where today a patient is pronounced dead, they’re not dead but rather doomed, and that there is a Hospital B that can save the day—but instead of being in a different place, it’s in a different time. It’s in the future.

Kurzgesagt and CGP Grey also have a new two-part video series on why we should stop aging forever. The first one is below. The second is here.

Am I seeing a trend? I hope so. To quote CGP Grey:

Humans must discard the learned helplessness that the reaper and their own brains have imposed on them.

Is Piketty’s Data Reliable?

by Alex Tabarrok on October 18, 2017 at 7:25 am in Books, Economics, History

When Thomas Piketty’s Capital in the Twenty-First Century first appeared many economists demurred on the theory but heaped praise on the empirical work. “Even if none of Piketty’s theories stands up,” Larry Summers argued, his “deeply grounded” and “painstaking empirical research” was “a Nobel Prize-worthy contribution”.

Theory is easier to evaluate than empirical work, however, and Phillip Magness and Robert Murphy were among the few authors to actually take a close look at Piketty’s data. They came to a different conclusion:

We find evidence of pervasive errors of historical fact, opaque methodological choices, and the cherry-picking of sources to construct favorable patterns from ambiguous data.

Magness and Murphy, however, could be dismissed as economic history outsiders with an ax to grind. Moreover, their paper was published in an obscure libertarian-oriented journal. (Chris Giles and Ferdinando Giugliano writing in the FT also pointed to errors but they could be dismissed as journalists.) The Magness and Murphy conclusions, however, have now been verified (and then some) by a respected figure in economic history, Richard Sutch.

I have never read an abstract quite like the one to Sutch’s paper, The One-Percent across Two Centuries: A Replication of Thomas Piketty’s Data on the Distribution of Wealth for the United States (earlier wp version):

This exercise reproduces and assesses the historical time series on the top shares of the wealth distribution for the United States presented by Thomas Piketty in Capital in the Twenty-First Century…. Here I examine Piketty’s US data for the period 1810 to 2010 for the top 10 percent and the top 1 percent of the wealth distribution. I conclude that Piketty’s data for the wealth share of the top 10 percent for the period 1870 to 1970 are unreliable. The values he reported are manufactured from the observations for the top 1 percent inflated by a constant 36 percentage points. Piketty’s data for the top 1 percent of the distribution for the nineteenth century (1810–1910) are also unreliable. They are based on a single mid-century observation that provides no guidance about the antebellum trend and only tenuous information about the trend in inequality during the Gilded Age. The values Piketty reported for the twentieth century (1910–2010) are based on more solid ground, but have the disadvantage of muting the marked rise of inequality during the Roaring Twenties and the decline associated with the Great Depression. This article offers an alternative picture of the trend in inequality based on newly available data and a reanalysis of the 1870 Census of Wealth. This article does not question Piketty’s integrity.

You know it’s bad when a disclaimer like that is necessary. In the body, Sutch is even stronger. He concludes:

Very little of value can be salvaged from Piketty’s treatment of data from the nineteenth century. The user is provided with no reliable information on the antebellum trends in the wealth share and is even left uncertain about the trend for the top 10 percent during the Gilded Age (1870–1916). This is noteworthy because Piketty spends the bulk of his attention devoted to America discussing the nineteenth-century trends (Piketty 2014: 347–50).

The heavily manipulated twentieth-century data for the top 1 percent share, the lack of empirical support for the top 10 percent share, the lack of clarity about the procedures used to harmonize and average the data, the insufficient documentation, and the spreadsheet errors are more than annoying. Together they create a misleading picture of the dynamics of wealth inequality. They obliterate the intradecade movements essential to an understanding of the impact of political and financial-market shocks on inequality. Piketty’s estimates offer no help to those who wish to understand the impact of inequality on “the way economic, social, and political actors view what is just and what is not” (Piketty 2014: 20).

One of the reasons Piketty’s book received such acclaim is that it fed into concerns about rising inequality, and it’s important to note that Sutch is not claiming that inequality hasn’t risen. Indeed, in some cases, Sutch argues that it has risen more than Piketty claims. Sutch is, rather, a journeyman of economic history upset not about Piketty’s conclusions but about the methods Piketty used to reach those conclusions.

Vaping Saves Lives

by Alex Tabarrok on October 13, 2017 at 7:25 am in Economics, Law, Medicine

E-cigarettes are less dangerous than cigarettes but are equally effective at delivering nicotine. Levy et al. estimate that if smokers switched to e-cigarettes millions of life-years would be saved, even taking into account plausible rates of non-smokers who start to vape. (It’s worth noting that the authors are all cancer researchers, statisticians and epidemiologists concerned with reducing cancer deaths.)

A Status Quo Scenario, developed to project smoking rates and health outcomes in the absence of vaping, is compared with Substitution models, whereby cigarette use is largely replaced by vaping over a 10-year period. We test an Optimistic and a Pessimistic Scenario, differing in terms of the relative harms of e-cigarettes compared with cigarettes and the impact on overall initiation, cessation and switching. Projected mortality outcomes by age and sex under the Status Quo and E-Cigarette Substitution Scenarios are compared from 2016 to 2100 to determine public health impacts.

Compared with the Status Quo, replacement of cigarette by e-cigarette use over a 10-year period yields 6.6 million fewer premature deaths with 86.7 million fewer life years lost in the Optimistic Scenario. Under the Pessimistic Scenario, 1.6 million premature deaths are averted with 20.8 million fewer life years lost. The largest gains are among younger cohorts, with a 0.5 gain in average life expectancy projected for the age 15 years cohort in 2016.
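A quick sanity check on those figures: in both scenarios the reported numbers imply roughly 13 life-years per premature death averted.

# Implied life-years per premature death averted in the two Levy et al.
# scenarios quoted above (simple division of the reported figures).
scenarios = {
    "Optimistic": (6.6e6, 86.7e6),    # (deaths averted, life-years saved)
    "Pessimistic": (1.6e6, 20.8e6),
}
for name, (deaths, life_years) in scenarios.items():
    print(f"{name}: {life_years / deaths:.1f} life-years per death averted")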

Vaping saves lives but the FDA has in the past tried to impose severe regulations on the industry and to make vaping less pleasurable. (Aside: It’s interesting that liberals tend to favor other risk-reducing devices such as condoms in the classroom but disfavor vaping, while conservatives often take the opposite side. I don’t think either group is basing their choices on the elasticities.)

The FDA, for example, has tried to ban flavored e-cigarettes. In a new NBER paper, Buckell, Marti and Sindelar calculate that:

…a ban on flavored e-cigarettes would drive smokers to combustible cigarettes, which have been found to be the more harmful way of getting nicotine (Goniewicz et al., 2017; Shahab et al., 2017). In addition, such a ban reduces the appeal of e-cigarettes to those who are seeking to quit; e-cigarettes have proven useful as a cessation device for these individuals (Hartmann-Boyce et al., 2016; Zhu et al., 2017), and we find that quitters have a preference for flavored e-cigarettes.

Fortunately, the new FDA commissioner Scott Gottlieb has signaled a more liberal attitude towards vaping. It could be the most consequential decision of his tenure.

Hat tip: The excellent Robert Wiblin from 80,000 Hours.

Today about a third of all new marriages are between couples who met online. Online dating has an interesting property–you are likely to be matched with a total stranger. Other matching methods, like meeting through friends, at church or even in a local bar, are more likely to match people who are already tied in a network. Thus, the rise of online dating is likely to significantly change how people connect and are connected to one another in networks. Ortega and Hergovich consider a simple model:

We consider a Gale-Shapley marriage problem, in which agents may belong to different races or communities. All agents from all races are randomly located on the same unit square. Agents want to marry the person who is closest to them, but they can only marry people who they know, i.e. to whom they are connected. As in real life, agents are highly connected with agents of their own race, but only poorly so with people from other races.

Using theory and random simulations they find that online dating rapidly increases interracial marriage. The result happens not simply because a person of one race might be matched online to a person of another race but also because once this first match occurs the friends of each of the matched couples are now more likely to meet and marry one another through traditional methods. The strength of weak ties is such that it doesn’t take too many weak ties to better connect formerly disparate networks.
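The mechanism is easy to see in a stripped-down simulation. The sketch below is a drastic simplification of the Ortega and Hergovich setup, using greedy closest-pair matching rather than a full Gale-Shapley algorithm and invented parameters, but it shows how adding a few random online ties raises the share of cross-group marriages.

import numpy as np

rng = np.random.default_rng(3)

def simulate(n=200, p_same=0.5, p_other=0.02, p_online=0.0):
    # Two equal-sized groups scattered uniformly on the unit square
    group = np.repeat([0, 1], n // 2)
    pos = rng.uniform(size=(n, 2))
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)

    # Probability of knowing someone: high within group, low across,
    # plus an optional uniform "online dating" probability
    same = group[:, None] == group[None, :]
    p = np.where(same, p_same, p_other) + p_online
    ties = np.triu(rng.random((n, n)) < p, 1)    # each acquaintance once (i < j)

    # Greedy matching: repeatedly marry the closest pair who know each other
    pairs = np.argwhere(ties)
    order = np.argsort(dist[pairs[:, 0], pairs[:, 1]])
    unmatched = set(range(n))
    cross = total = 0
    for i, j in pairs[order]:
        if i in unmatched and j in unmatched:
            unmatched -= {int(i), int(j)}
            total += 1
            cross += int(group[i] != group[j])
    return cross / total

print("Share of cross-group marriages (toy simulation)")
print(f"  offline ties only : {simulate():.1%}")
print(f"  plus online ties  : {simulate(p_online=0.05):.1%}")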

Interracial marriage, defined to include marriages between White, Black, Hispanic, Asian, American Indian or multiracial persons, has been increasing since at least the 1960s, but the authors argue, using a graph of interracial marriage rates over time, that the rate of growth increased with the introduction and popularization of online dating. Note the big increase in interracial marriage shortly after the introduction of Tinder!

(The authors convincingly argue that this not due to a composition effect.)

Since online dating increases the number of potential marriage partners it leads to marriages which are on average “closer” in preference space to those in a model without online dating. Thus, the model predicts that online dating should reduce the divorce rate and there is some evidence for this hypothesis:

Cacioppo et al. (2013) find that marriages created online were less likely to break up and reveal a higher marital satisfaction, using a sample of 19,131 Americans who married between 2005 and 2012. They write: “Meeting a spouse on-line is on average associated with slightly higher marital satisfaction and lower rates of marital break-up than meeting a spouse through traditional (off-line) venues”.

The model also applies to many other potential networks.

Hat tip: MIT Technology Review.

Richard Thaler wins the Nobel for behavioral economics! An excellent choice and one that makes my life easier because you probably already know his work. Indeed his work may already have influenced how much you save for retirement, how you pay your taxes and whether you will donate a kidney or not. In Britain, Thaler’s work was one of the inspirations for the Behavioral Insights Team, which applies behavioral economics to public policy. Since it was established in 2010, similar teams have been created around the world, including in the United States.

Thaler’s intellectual biography, Misbehaving (available free on Kindle for Amazon Prime members; is this a nudge?), is a fun guide to his work. Thaler will be the first to tell you he isn’t that smart. Relative to other Nobel prize winners that might even be true. None of his papers are technically difficult or excessively math heavy and most of his ideas are pretty obvious–obvious once you have heard them! Thaler cannot have been the first person in the world to notice that people like cashews but also like it when you take the cashews away to prevent them from eating more than they really want to eat (preferences Thaler noted at a dinner party of economists). But other people, especially economists, dismissed the evidence in front of their noses that people weren’t as rational as their theories suggested: people will be more careful with big decisions, errors will cancel, markets will take care of that. There were plenty of reasons to go back to pondering the beautiful austerity of theory. Thaler, however, especially after reading Kahneman and Tversky’s Judgment under Uncertainty: Heuristics and Biases, realized that there could be a theory of misbehaving, a theory of irrational choice.

That theory is now called behavioral economics. It’s not as clean and straight as neo-classical theory. We still don’t know when one bias, of the many that have been documented, applies and when another applies. So much depends on context and what we bring to it that perhaps we never will. Nevertheless, there is no longer any question that some features of choice and the economy are better explained via systematic biases than by purely rational decision making.

In addition to Misbehaving and Nudge (the latter with Cass Sunstein who brought these ideas to law and government) you can find many of Thaler’s key ideas in the Anomalies column of the Journal of Economic Perspectives. Probably this is the first economics Nobel to be given for a popular column! In many ways, however, these columns made Thaler’s reputation. The anomalies column was always a highlight of the issue and I remember discussing and debating these columns with Tyler and many others as they appeared. The same was true throughout the economics profession. Even economists like an anomaly.

One of the most important applications of behavioral economics has been to savings. Savings decisions are difficult because it’s not obvious how much to save or even how to save (bank accounts, mutual funds, Roth IRA, 401k etc. etc.). In addition, the decision can be administratively complex with annoying paperwork, and the benefits of good decision making don’t occur until decades into the future. Perhaps most importantly, we don’t receive clear and quick feedback about our choices. We don’t know whether we have saved too little or too much until it’s too late to change our decision. As a result, many of us fall back on defaults. These are the motivating ideas behind Thaler’s recommendations to set default rules such that people are automatically enrolled in pension plans that invest in low-cost market indices. Such default rules have changed saving behavior in the United States and around the world. Thaler’s Save More Tomorrow plans also ask people whether they want to plan today to save more of their raises, a simple yet profound change in default that makes it easier to save by lowering the perceived cost.

Thaler’s research is even changing football. His paper with Cade Massey, Overconfidence vs. Market Efficiency in the National Football League, looked at “right to choose decisions” in the player draft. On the one hand, millions of dollars are made and lost on these decisions and they are being made repeatedly by professionals; thus, the case for rational decisions would seem to be strong. But on the other hand, people are overconfident, they tend to make extreme forecasts, there is a winner’s curse, there is a false consensus effect (you think that everyone likes what you like), and there is present bias. These biases all suggest that decisions might be made poorly, even given the big stakes. Massey and Thaler find that it’s the latter.

Using archival data on draft-day trades, player performance and compensation, we compare the market value of draft picks with the historical value of drafted players. We find that top draft picks are overvalued in a manner that is inconsistent with rational expectations and efficient markets and consistent with psychological research.

Moreover, and this is the kicker, Massey and Thaler’s research has passed the market test! Bill Belichick started to pay attention first (econ undergrad natch) and now other smart teams are applying Thaler’s research to improve their choices.

Few economists have had more practical influence than Richard Thaler and behavioral economics is still on the upswing.

Sentence of the Day

by Alex Tabarrok on October 4, 2017 at 3:35 pm in Economics

…we show that the industries where concentration has risen the most are also those where there has been the fastest growth in productivity and innovation.

From Van Reenen and Patterson at HBR based on The Fall of the Labor Share and the Rise of Superstar Firms.