
Prejudice and foreign policy views

Scholars of foreign policy preference formation have accepted what Rathbun et al. (2016) call the “vertical hierarchy model,” which says that policy attitudes are determined by more abstract moral ideas about right and wrong. This paper turns this idea on its head by introducing the prejudice first model, arguing that foreign policy preferences and orientations are in part driven by attitudes towards the groups being affected by specific policies. Three experiments are used to test the utility of this framework. First, when conservatives heard about Muslims killing Christians, as opposed to the opposite scenario, they were more likely to support a humanitarian intervention and agree that the United States has a moral obligation to help those persecuted by their governments. Liberals showed no religious preference. When the relevant identity group was race, however, liberals were more likely to want to help blacks persecuted by whites, while conservatives showed no racial bias. In contrast, the degree of persecution mattered relatively little to respondents in either experiment. In another experiment, conservatives adopted more isolationist policies after reading a text about the country becoming more liberal, as opposed to a paragraph that said the United States was a relatively conservative country. The treatment showed the opposite effect on liberals, although the results fell just short of statistical significance. While not necessarily contradicting the vertical hierarchy model, the results indicate that prejudices and biases not only help influence foreign policy attitudes, but also moral perceptions of right and wrong in international politics.

That is from Richard Hanania and Robert Trager.  File under “Mood Affiliation…”

Why so many women in public affairs schools?

This paper presents three new stylized facts: first, schools of public affairs hire many economists; second, those economists are disproportionately female; and third, salaries in schools of public affairs are, on average, lower than salaries in mainline departments of economics. We seek to understand the linkage, if any, among these facts. We assembled a unique database of over 2,150 faculty salary profiles from the top 50 Schools of Public Affairs in the United States as well as the corresponding Economics and Political Science departments. For each faculty member we obtained salary data to analyze the relationship between scholarly discipline, department placement, gender, and annual salary compensation. We found substantial pay differences based on departmental affiliation, significant differences in citation records between male and female faculty in schools of public affairs, and no evidence that the public affairs discount could be explained by compositional differences with respect to gender, experience or scholarly citations.

That is the abstract of a new NBER working paper by Lori L. Taylor, Kalena E. Cortes, and Travis C. Hearn.  I have a vague sense that the same might be true of public policy schools as well.  Why?

Economists study busing

This paper dates from 2012, but it is one of the best looks at what we know about busing, based on rigorous analysis of data, combined with natural experiments:

We study the impact of the end of race-based busing in Charlotte-Mecklenburg schools (“CMS”) on academic achievement, educational attainment, and young adult crime. In 2001, CMS was prohibited from using race in assigning students to schools. School boundaries were redrawn dramatically to reflect the surrounding neighborhoods, and half of its students received a new assignment. Using addresses measured prior to the policy change, we compare students in the same neighborhood that lived on opposite sides of a newly drawn boundary. We find that both white and minority students score lower on high school exams when they are assigned to schools with more minority students. We also find decreases in high school graduation and four-year college attendance for whites, and large increases in crime for minority males. The impacts on achievement and attainment are smaller in younger cohorts, while the impact on crime remains large and persistent for at least nine years after the re-zoning. We show that compensatory resource allocation policies in CMS likely played an important role in mitigating the impact of segregation on achievement and attainment, but had no impact on crime. We conclude that the end of busing widened racial inequality, despite efforts by CMS to mitigate the impact of increases in segregation.

That is from Stephen B. Billings, David J. Deming, and Jonah E. Rockoff.

Should the citizenship question be put on the Census?

That is the topic of my latest Bloomberg column, and here is one excerpt:

Unlike many of those who push for the question, I would like to boost the flow of legal immigration by a factor of two or three. Nonetheless, are we supposed to let foreigners in (which I favor), and give them a rapid path to citizenship (which I also favor), but somehow we are not allowed to ask them if they are citizens? To me this boggles the mind.

The real point is that the Democratic Party has talked itself into an untenable and indeed politically losing rhetorical stance on immigration (did you watch the debates? decriminalize illegal migration? health care benefits for illegal immigrants?), and the Census battle is another example of that.  It is no surprise that Trump wishes to keep it alive as a political issue:

Do you really wish for your view to be so closely affiliated with the attitude that citizenship is a thing to hide? I would be embarrassed if my own political strategy implied that I take a firm view — backed by strong moralizing — that we not ask individuals about their citizenship on the Census form. I would think somehow I was, if only in the longer run, making a huge political blunder to so rest the fate of my party on insisting on not asking people about their citizenship.

Not asking about citizenship seems to signify an attitude toward immigrants something like this: Get them in and across the border, their status may be mixed and their existence may be furtive, and let’s not talk too openly about what is going on, and later we will try to get all of them citizenship. Given the current disagreement between the two parties on immigration questions, that may well be the only way of getting more immigrants into the U.S., which I hold to be a desirable goal. But that is a dangerous choice of political turf, and it may not help the pro-immigration cause in the longer run.

Finally:

Countries that do let in especially high percentages of legal immigrants, such as Canada and Australia, take pretty tough stances in controlling their borders. Both of those countries ask about citizenship on their censuses. When citizens feel in control of the process, they may be more generous in terms of opening the border.

If you can’t ask about citizenship on your census, as indeed Canada and Australia do, it is a sign that your broader approach to immigration is broken.  I know this is a hard one to back out of, but if your response is to attack the motives of the Republicans, or simply reiterate the technocratic value of a more accurate Census, it is a sign of not yet being “woke” on this issue.  America desperately needs more legal immigration.

California’s regulatory code for housing is too strict

The sponsors of SB 50 seem to recognize that the state’s housing problems are at least partially man-made. Indeed, California is a leader in regulating just about everything — including insurance carriers, public utilities and housing construction. If California’s regulatory code underwent some serious spring cleaning, it could help the state at least make a dent in its housing affordability crisis.

The California Code of Regulations — the compilation of the state’s administrative rules — contains more than 21 million words. If reading it were a 40-hour-a-week job, it would take more than six months to get through it, and understanding all that legalese is another matter entirely.

Included in the code are more than 395,000 restrictive terms such as “shall,” “must” and “required,” a good gauge of how many actual requirements exist. This is by far the most regulation of any state in the country, according to a new database maintained by the Mercatus Center, a research institute at George Mason University. The average state has about 137,000 restrictive terms in its code, or roughly one-third as many as California. Alaska and Montana are among the states with as few as 60,000.

That is from James Broughel and Emily Hamilton at Mercatus, in The Los Angeles Times.
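The counting method behind those figures is simple enough to sketch. Here is a minimal Python illustration: the excerpt names “shall,” “must,” and “required,” and “may not” and “prohibited” round out the term list Mercatus-style restriction counts typically use; the file name in the commented usage is hypothetical.

```python
import re

# Restrictive terms: the excerpt names "shall," "must," and "required";
# "may not" and "prohibited" complete the usual five-term list.
RESTRICTIVE_TERMS = ["shall", "must", "may not", "required", "prohibited"]

def count_restrictions(text: str) -> int:
    """Count whole-word, case-insensitive occurrences of restrictive terms."""
    pattern = r"\b(" + "|".join(re.escape(t) for t in RESTRICTIVE_TERMS) + r")\b"
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

print(count_restrictions("Applicants shall file annually; fees may not be waived."))  # 2

# With a (hypothetical) local plain-text copy of the state code:
# with open("california_code_of_regulations.txt") as f:
#     print(count_restrictions(f.read()))   # ~395,000 for California
```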

The decline in American infrastructure proficiency

That is the concern of a new paper by Ray Fair, here is the abstract:

This paper examines the history of U.S. infrastructure since 1929 and in the process reports an interesting fact about the U.S. economy. Infrastructure as a percent of GDP began a steady decline around 1970, and the government budget deficit became positive and large at roughly the same time. The infrastructure pattern in other countries does not mirror that in the United States, so the United States appears to be a special case. The overall results suggest that the United States became less future oriented beginning around 1970. This change has persisted. This is the interesting fact. Whether it can be explained is doubtful.

Is it not the rise of interest in spending more money on medical care?

Allegedly Unique Events

One common response to yesterday’s post, What is the Probability of a Nuclear War?, was to claim that probability cannot be assigned to “unique” events. That’s an odd response. Do such respondents really believe that the probability of a nuclear war was not higher during the Cuban Missile Crisis than immediately afterwards when a hotline was established and the Partial Nuclear Test Ban Treaty signed?

Claiming that probability cannot be assigned to unique events seems more like an excuse to ignore best estimates than a credible epistemic position. Moreover, the claim that probability cannot be assigned to “unique” events is testable, as Philip Tetlock points out in an excellent 80,000 Hours Podcast with Robert Wiblin.

I mean, you take that objection, which you hear repeatedly from extremely smart people that these events are unique and you can’t put probabilities on them, you take that objection and you say, “Okay, let’s take all the events that the smart people say are unique and let’s put them in a set and let’s call that set allegedly unique events. Now let’s see if people can make forecasts within that set of allegedly unique events and if they can, if they can make meaningful probability judgments of these allegedly unique events, maybe the allegedly unique events aren’t so unique after all, maybe there is some recurrence component.” And that is indeed the finding that when you take the set of allegedly unique events, hundreds of allegedly unique events, you find that the best forecasters make pretty well calibrated forecasts fairly reliably over time and don’t regress too much toward the mean.

In other words, since an allegedly unique event either happens or it doesn’t, it is difficult to claim that any single probability estimate was better than another. But when we look at many forecasts, each of an allegedly unique event, what we find is that some people get more of them right than others. Moreover, the individuals who get more events right approach these questions using a set of techniques and tools that can be replicated and used to improve other forecasters. Here’s a summary from Mellers, Tetlock, Baker, Friedman and Zeckhauser:

In recent years, IARPA (the Intelligence Advanced Research Project Activity), the research wing of the U.S. Intelligence Community, has attempted to learn how to better predict the likelihoods of unique events. From 2011 to 2015, IARPA sponsored a project called ACE, comprising four massive geopolitical forecasting tournaments conducted over the span of four years. The goal of ACE was to discover the best possible ways of eliciting beliefs from crowds and optimally aggregating them. Questions ranged from pandemics and global leadership changes to international negotiations and economic shifts. An example question, released on September 9, 2011, asked, “Who will be inaugurated as President of Russia in 2012?”…The Good Judgment Project studied over a million forecasts provided by thousands of volunteers who attached numerical probabilities to such events (Mellers, Ungar, Baron, Ramos, Gurcay, et al., 2014; Tetlock, Mellers, Rohrbaugh, & Chen, 2014).

In the ACE tournaments, IARPA defined predictive success using a metric called the Brier scoring rule (the squared deviation between forecasts and outcomes, where outcomes are 0 and 1 for the non-occurrence and occurrence of events, respectively; Brier, 1950). Consider the question, “Will Bashar al-Assad be ousted from Syria’s presidency by the end of 2016?” Outcomes were binary; Assad either stays or he is ousted. Suppose a forecaster predicts that Assad has a 60% chance of staying and a 40% chance of being ousted. If, at the end of 2016, Assad remains in power, the participant’s Brier score would be [(1-.60)^2 + (0-.40)^2] = 0.32. If Assad is ousted, the forecaster’s score is [(0-.60)^2 + (1-.40)^2] = 0.72. With Brier scores, lower values are better, and zero is a perfect score.

…The Good Judgment Project won the ACE tournaments by a wide margin each year by being faster than the competition at finding ways to push probabilities toward 0 for things that did not happen and toward 1 for things that did happen. Five drivers of accuracy accounted for Good Judgment’s success. They were identifying, training, teaming, and tracking good forecasters, as well as optimally aggregating predictions. (Mellers, et al., 2014; Mellers, Stone, Atanasov, Rohrbaugh, Metz, et al., 2015a; Mellers, Stone, Murray, Minster, Rohrbaugh, et al., 2015b).
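Since the two-category Brier score is just arithmetic, a minimal Python sketch makes the scoring concrete (the function name is mine; the numbers follow the excerpt’s Assad example):

```python
def brier_score(forecast: float, outcome: int) -> float:
    """Two-category Brier score (Brier, 1950): the sum of squared deviations
    between forecast probabilities and 0/1 outcomes. 0 is perfect; 2 is worst."""
    # forecast: probability assigned to the event occurring
    # outcome: 1 if the event occurred, 0 if it did not
    return (outcome - forecast) ** 2 + ((1 - outcome) - (1 - forecast)) ** 2

# The forecaster gives Assad a 60% chance of staying in power:
print(round(brier_score(0.60, 1), 2))  # stays:  (1-.60)^2 + (0-.40)^2 = 0.32
print(round(brier_score(0.60, 0), 2))  # ousted: (0-.60)^2 + (1-.40)^2 = 0.72
```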

Dining out as cultural trade

By Joel Waldfogel, here is the abstract:

Perceptions of Anglo-American dominance in movie and music trade motivate restrictions on cultural trade. Yet, the market for another cultural good, food at restaurants, is roughly ten times larger than the markets for music and film. Using TripAdvisor data on restaurant cuisines, along with Euromonitor data on overall and fast food expenditure, this paper calculates implicit trade patterns in global cuisines for 52 destination countries. We obtain three major results. First, the pattern of cuisine trade resembles the “gravity” patterns in physically traded products. Second, after accounting for gravity factors, the most popular cuisines are Italian, Japanese, Chinese, Indian, and American. Third, excluding fast food, the largest net exporters of their cuisines are the Italians and the Japanese, while the largest net importers are the US – with a 2017 deficit of over $130 billion – followed by Brazil, China, and the UK. With fast food included, the US deficit shrinks to $55 billion, but the US remains the largest net importer along with China and, to a lesser extent, the UK and Brazil. Cuisine trade patterns appear to run starkly counter to the audiovisual patterns that have motivated concern about Anglo-American cultural dominance.

For the pointer I thank John Alcorn.
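For reference, the “gravity” pattern the abstract invokes is the textbook specification in which bilateral flows scale with the two countries’ economic sizes and shrink with the distance between them. This is the generic form, not necessarily the paper’s exact estimating equation:

```latex
% X_{ij}: flow from origin i to destination j; Y_i, Y_j: economic sizes
% (e.g., GDP); D_{ij}: bilateral distance; G, \alpha, \beta, \gamma: parameters
X_{ij} = G \, \frac{Y_i^{\alpha} \, Y_j^{\beta}}{D_{ij}^{\gamma}}
```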

People like working with their friends, but it makes them less productive

From Sangyoon Park:

Through a field experiment at a seafood-processing plant, I examine how working alongside friends affects employee productivity and how this effect is heterogeneous with respect to an employee’s personality. This paper presents two main findings. First, worker productivity declines when a friend is close enough to socialize with. Second, workers who are higher on the conscientiousness scale show smaller productivity declines when working alongside a friend. Estimates suggest that a median worker is willing to pay 4.5 percent of her wage to work next to friends.

That is from American Economic Journal: Applied Economics, via Adam Ozimek.

Gender and competition

From American Economic Journal: Applied Economics:

“Do Women Give Up Competing More Easily? Evidence from the Lab and the Dutch Math Olympiad,” by Thomas Buser and Huaiping Yuan.

We use lab experiments and field data from the Dutch Math Olympiad to show that women are more likely than men to stop competing if they lose. In a math competition in the lab, women are much less likely than men to choose competition again after losing in the first round. In the Math Olympiad, girls, but not boys, who fail to make the second round are less likely to compete again one year later. This gender difference in the reaction to competition outcomes may help to explain why fewer women make it to the top in business and academia.

Here is the link to the paper.  Here are earlier, ungated versions.

From the comments, on alcohol abuse

I refer you to Prevalence of 12-Month Alcohol Use, High-Risk Drinking, and DSM-IV Alcohol Use Disorder in the United States, 2001-2002 to 2012-2013. My apologies for not being able to locate the primary data sooner.

Key summary quotes below:

Twelve-month alcohol use significantly increased from 65.4% in 2001-2002 to 72.7% in 2012-2013, a relative percentage increase of 11.2%.

The prevalence of 12-month high-risk drinking increased significantly between 2001-2002 and 2012-2013 from 9.7% to 12.6% (change, 29.9%) in the total population.

The prevalence of 12-month DSM-IV AUD increased significantly from 8.5% to 12.7% (change, 49.4%) in the total population.

Twelve-month DSM-IV AUD among 12-month alcohol users significantly increased from 12.9% to 17.5% (change, 35.7%) in the total population.

At the end of the day, I am still going to trust outcomes data over survey data. People lie, autopsies don’t. What I know is that acute alcohol poisoning increased by 700% in 20 years. You die from acute alcohol poisoning not because you slowly got sick over years, but because you drank so much so quickly that your body is overwhelmed. And this is in spite of the medical profession getting better at using hemodialysis to bring down acutely toxic ethanol levels.

What I also know is that alcohol-related hepatic deaths bottomed out in 2003 and have since been rising rapidly (~50% increase). This is because the generation socialized during Prohibition had lower lifetime alcohol use and problematic alcohol use than the generations before or after. As that generation died off, or aged out, successive generations who drank more started refilling the hepatic wards. Even more fun: for every age bracket except the youngest cohorts, we are seeing more alcohol-related hepatic death than we saw a decade ago.

These are basically impossible to square with a thesis of no substantial change in drinking patterns. They fit quite nicely with formal epidemiological surveys showing more problematic drinking and a shift in alcohol consumption.

That is from “Sure”; see also his/her other comments in the longer thread.
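Note that the “change” figures in the quoted statistics are relative increases over the baseline prevalence, not percentage-point changes; a quick Python check reproduces all four:

```python
# Relative (not percentage-point) change between two prevalence figures.
def relative_change(before: float, after: float) -> float:
    return (after - before) / before * 100

print(round(relative_change(65.4, 72.7), 1))  # 11.2 -- 12-month alcohol use
print(round(relative_change(9.7, 12.6), 1))   # 29.9 -- high-risk drinking
print(round(relative_change(8.5, 12.7), 1))   # 49.4 -- DSM-IV AUD
print(round(relative_change(12.9, 17.5), 1))  # 35.7 -- AUD among alcohol users
```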

Are quality-adjusted medical prices declining for chronic disease?

At least for diabetes care, the answer seems to be yes, according to Karen Eggleston et al.:

We analyze individual-level panel data on medical spending and health outcomes for 123,548 patients with type 2 diabetes in four health systems. Using a “cost-of-living” method that measures value based on improved survival, we find a positive net value of diabetes care: the value of improved survival outweighs the added costs of care in each of the four health systems. This finding is robust to accounting for selective survival, end-of-life spending, and a range of values for a life-year or, equivalently, to attributing only a fraction of survival improvements to medical care.

That is from a new NBER working paper.  One way to read this paper is to be especially optimistic about medical progress, and also the U.S. health care system and furthermore the net contribution of science and medicine to economic growth.  Another way to read this paper is to be especially pessimistic about human discipline and the ability to follow doctor’s orders.
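The “cost-of-living” comparison boils down to a simple calculation; here is a stylized sketch with made-up numbers (the paper’s actual estimates differ):

```python
# Stylized "cost-of-living" net-value calculation for diabetes care.
# All three inputs below are illustrative assumptions, not the paper's numbers.
value_per_life_year = 100_000  # assumed dollar value of one life-year
life_years_gained = 0.5        # assumed survival improvement from care
added_cost_of_care = 20_000    # assumed additional medical spending

net_value = value_per_life_year * life_years_gained - added_cost_of_care
print(net_value)  # 30000 > 0: the survival gain outweighs the added cost
```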

We place too much weight on redundant information

The present work identifies a so-far overlooked bias in sequential impression formation. When the latent qualities of competitors are inferred from a cumulative sequence of observations (e.g., the sum of points collected by sports teams), impressions should be based solely on the most recent observation because all previous observations are redundant. Based on the well-documented human inability to adequately discount redundant information, we predicted the existence of a cumulative redundancy bias. Accordingly, perceivers’ impressions are systematically biased by the unfolding of a performance sequence when observations are cumulative. This bias favors leading competitors and persists even when the end result of the performance sequence is known. We demonstrated this cumulative redundancy bias in 8 experiments in which participants had to sequentially form impressions about the qualities of two competitors from different performance domains (i.e., computer algorithms, stocks, and soccer teams). We consistently found that perceivers’ impressions were biased by cumulative redundancy. Specifically, impressions about the winner and the loser of a sequence were more divergent when the winner took an early lead compared with a late lead. When the sequence ended in a draw, participants formed more favorable impressions about the competitor who was ahead during most observations. We tested and ruled out several alternative explanations related to primacy effects, counterfactual thinking, and heuristic beliefs. We discuss the wide-ranging implications of our findings for impression formation and performance evaluation.

That is from a new paper by Hans Alves and André Mata, via the excellent Kevin Lewis.
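To see why the authors call the earlier observations redundant: with cumulative scores, the most recent observation already encodes everything that came before. A small Python sketch with hypothetical point sequences:

```python
from itertools import accumulate

# Hypothetical round-by-round points for competitors A and B. Both
# scenarios end in the same 15-15 draw; only the lead pattern differs.
early_lead = ([5, 4, 3, 2, 1], [1, 2, 3, 4, 5])  # A ahead for most rounds
late_lead = ([1, 2, 3, 4, 5], [5, 4, 3, 2, 1])   # A behind until the end

for a, b in (early_lead, late_lead):
    print(list(zip(accumulate(a), accumulate(b))))
    # The final pair, (15, 15) in both cases, subsumes every earlier
    # cumulative observation, so rational impressions of A and B should be
    # identical across scenarios; the experiments find they are not.
```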