
The distribution of cities, then and now

In today’s developed countries, cities are thus scattered across historically important agricultural areas; as a result, there is a relatively higher degree of spatial equality in the distribution of resources within these countries.  By contrast, in today’s developing countries, cities are concentrated more on the coast where transport conditions, compared to agricultural suitability, are more favorable.

That is from Henderson, Squires, Storeygard, and Weil in the January 2018 QJE, based on light data measured by satellites.  Overall, I view this regularity as a negative for the prospects for liberalism and democracy in emerging economies, as urban concentration can encourage too much rent-seeking and kleptocracy.  It also reflects the truly amazing wisdom of (some of) our Founding Fathers, who saw a connection between liberty and decentralized agrarianism.  It suggests a certain degree of pessimism about China’s One Belt, One Road initiative.  The development of the hinterland in the United States may not be a pattern that today’s emerging economies necessarily should or could be seeking to replicate.  Which makes urban economics and Henry George all the more important.

Is legal marijuana more potent?

Yes, here is Keith Humphreys from Wonkblog:

Although some people believe prohibiting drugs is what makes their potency increase, the potency of marijuana under legalization has disproved that idea. Potency rises in both legal and illegal markets for the simple reason that it conveys advantages to sellers. More potent drugs have more potential to addict customers, thereby turning them into reliable profit centers.

In other legal drug markets, regulators constrain potency. Legal alcohol beverage concentrations are regulated in a variety of ways, including through different levels of tax for products of different strengths as well as constraints on labeling and place of sale. In most states, for a beverage to be marketed and sold as “beer,” its alcohol content must fall within a specified range. Similarly, if wine is distilled to the point that its alcohol content rises too high, some states require it be sold as spirits (i.e., as “brandy”) and limit its sale locations.

As states have legalized marijuana, they have put no comparable potency restrictions in place, for example capping THC content or levying higher taxes on more potent marijuana strains. Sellers are doing the economically rational thing in response: ramping up potency.

How about the Netherlands?:

The study was conducted in the Netherlands, where marijuana is legally available through “coffee shops.” The researchers examined the level of delta-9-tetrahydrocannabinol (THC), the main intoxicant in marijuana, over a 16-year period. Marijuana potency more than doubled from 8.6 percent in 2000 to 20.3 percent in 2004, which was followed by a surge in the number of people seeking treatment for marijuana-related problems. When potency declined to 15.3 percent THC, marijuana treatment admissions fell thereafter. The researchers estimated that for every 3 percent increase in THC, roughly one more person per 100,000 in the population would seek marijuana use disorder treatment for the first time.

The Dutch findings are relevant to the United States because high THC marijuana products have proliferated in the wake of legalization. The average potency of legal marijuana products sold in the state of Washington, for example, is 20 percent THC, with some products being significantly higher.
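Taking the researchers’ rule of thumb at face value (one additional first-time treatment seeker per 100,000 population for each 3-point rise in THC), the implied magnitudes are easy to check.  A back-of-the-envelope sketch:

```python
# Back-of-the-envelope on the Dutch estimate quoted above: each 3-point
# rise in THC potency adds roughly one first-time treatment seeker
# per 100,000 population.
ADMISSIONS_PER_3_POINTS = 1.0  # per 100,000, from the quoted study

def extra_admissions_per_100k(thc_start, thc_end):
    """Predicted change in first-time treatment admissions per 100,000."""
    return (thc_end - thc_start) / 3.0 * ADMISSIONS_PER_3_POINTS

# The potency run-up quoted above: 8.6% THC in 2000 to 20.3% in 2004
print(extra_admissions_per_100k(8.6, 20.3))   # ~3.9 more per 100,000
# ...and the later decline to 15.3% THC
print(extra_admissions_per_100k(20.3, 15.3))  # ~1.7 fewer per 100,000
```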

I believe that marijuana legalization has moved rather rapidly into being an overrated idea.  To be clear, it is still an idea I favor.  It seems to me wrong and immoral to put people in jail for ingesting substances into their body, or for aiding others in doing so, at least provided fraud is absent in the transaction.  That said, IQ is so often what is truly scarce in society.  And voluntary consumption decisions that lower IQ are not something we should be regarding with equanimity.  Ideally I would like to see government discourage marijuana consumption by using the non-coercive tools at its disposal, for instance by making it harder for marijuana to have a prominent presence in the public sphere, or by discouraging more potent forms of the drug.  How about higher taxes and less public availability for more potent forms of pot, just as in many states beer and stronger forms of alcohol are not always treated equally under the law?

Why has investment been weak?

Germán Gutiérrez and Thomas Philippon have a new paper on this topic:

We analyze private fixed investment in the U.S. over the past 30 years. We show that investment is weak relative to measures of profitability and valuation — particularly Tobin’s Q, and that this weakness starts in the early 2000’s. There are two broad categories of explanations: theories that predict low investment along with low Q, and theories that predict low investment despite high Q. We argue that the data does not support the first category, and we focus on the second one. We use industry-level and firm-level data to test whether under-investment relative to Q is driven by (i) financial frictions, (ii) changes in the nature and/or localization of investment (due to the rise of intangibles, globalization, etc), (iii) decreased competition (due to technology, regulation or common ownership), or (iv) tightened governance and/or increased short-termism. We do not find support for theories based on risk premia, financial constraints, safe asset scarcity, or regulation. We find some support for globalization; and strong support for the intangibles, competition and short-termism/governance hypotheses. We estimate that the rise of intangibles explains 25-35% of the drop in investment; while Concentration and Governance explain the rest. Industries with more concentration and more common ownership invest less, even after controlling for current market conditions and intangibles. Within each industry-year, the investment gap is driven by firms owned by quasi-indexers and located in industries with more concentration and more common ownership. These firms return a disproportionate amount of free cash flows to shareholders. Lastly, we show that standard growth-accounting decompositions may not be able to identify the rise in markups.
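For readers who want the mechanics, here is a stylized sketch of the under-investment-relative-to-Q test the abstract describes.  Everything below is simulated placeholder data, not the authors’ specification or dataset:

```python
import numpy as np

# Stylized investment-Q test: regress investment rates on Tobin's Q,
# then ask whether the residuals (under-investment) line up with
# concentration. All numbers are simulated placeholders.
rng = np.random.default_rng(0)
n = 200                                   # industry-year observations
q = rng.uniform(0.5, 3.0, n)              # Tobin's Q
concentration = rng.uniform(0.1, 0.9, n)  # a Herfindahl-style index
# Simulated "truth": investment rises with Q, falls with concentration.
invest = 0.02 + 0.05 * q - 0.04 * concentration + rng.normal(0, 0.01, n)

# First stage: investment on Q alone.
X = np.column_stack([np.ones(n), q])
beta, *_ = np.linalg.lstsq(X, invest, rcond=None)
residual = invest - X @ beta

# Second stage: do concentrated industries under-invest relative to Q?
X2 = np.column_stack([np.ones(n), concentration])
gamma, *_ = np.linalg.lstsq(X2, residual, rcond=None)
print(f"residual investment vs. concentration: slope = {gamma[1]:.3f}")
# A negative slope is the qualitative pattern the abstract reports:
# more concentrated industries invest less than their Q would predict.
```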

Being on the road, I have yet to read it, but it looks like one of the most important papers of the year.

Were U.S. nuclear tests more harmful than we had thought?

So says Keith A. Meyers, a job candidate from the University of Arizona.  I found this to be a startling result, taken from his secondary paper:

During the Cold War the United States detonated hundreds of atomic weapons at the Nevada Test Site. Many of these nuclear tests were conducted above ground and released tremendous amounts of radioactive pollution into the environment. This paper combines a novel dataset measuring annual county level fallout patterns for the continental U.S. with vital statistics records. I find that fallout from nuclear testing led to persistent and substantial increases in overall mortality for large portions of the country. The cumulative number of excess deaths attributable to these tests is comparable to the bombings of Hiroshima and Nagasaki.

Basically he combines mortality estimates with measures of Iodine-131 concentrations in locally produced milk, “to provide a more precise estimate of human exposure to fallout than previous studies.” The most significant effects are in the Great Plains and Central Northwest of America, and “Back-of-the-envelope estimates suggest that fallout from nuclear testing contributed between 340,000 to 460,000 excess deaths from 1951 to 1973.”

His primary job market paper is on damage to agriculture from nuclear testing.

Is there a glass ceiling for female artists?

Using a unique data set consisting of the population of fine art auctions from 2000 to 2017 for Western artists, we provide strong empirical evidence for a glass ceiling for female artists. First, we show that female artists are less likely to transition from the primary (gallery) into the secondary (auction) market. This glass ceiling results in a selection mechanism which is manifested in an average premium of 6% for artworks by female artists. Second, this premium is driven by a small number of women located at the top of the market and turns into a discount when we account for the number of artworks sold. The superstar effect, where a small number of individuals absorbs the majority of industry revenues, is amplified for the group of female artists. Third, at the top 0.1% of the market artworks by female artists are traded at a discount of 9%. Moreover, the very top 0.03% of the market, where 41% of the revenues are concentrated, are still entirely off limits for women. Overall, we find two glass ceilings for women pursuing an artistic career. While the first one is located at the starting point of a female artist’s career, the second one can be found at the transition into the superstar league of the market and remains yet impermeable. Our study has wide-reaching implications for industries characterized by a superstar effect and a strong concentration of men relative to women.

That is the abstract of a new paper by Fabian Y.R.P. Bocart, Marina Gertsberg, and Rachel A. J. Pownall, via the excellent Kevin Lewis.

Recently I’ve been enjoying @womensart1, a good way to see interesting artworks that otherwise don’t get so much attention.  And here is my older essay “Why Women Succeed, and Fail, in the Arts.”

Hacking the Nazis

Some resisters fought the Nazis in the streets while others fought them from within by hacking some of the world’s first information technology systems. Ava Ex Machina has a fascinating post discussing some of these unheralded hackers. Here is one:

René Carmille was a punch card computer expert and comptroller general of the French Army, who would later head up the Demographics Department of the French National Statistics Service. As quickly as IBM worked with the Nazis to enable them to use their punch card computer systems to update census data to find and round up Jewish citizens, René and his team of double agents worked just as fast to manipulate the data to undermine those efforts.

The IEEE newspaper, The Institute, describes Carmille as being an early ethical hacker: “Over the course of two years, Carmille and his group purposely delayed the process by mishandling the punch cards. He also hacked his own machines, reprogramming them so that they’d never punch information from Column 11 [which indicated religion] onto any census card.” His work to identify and build in this exploit saved thousands of Jews from being rounded up and deported to death camps.

René was arrested in Lyon in 1944 and interrogated for two days by Klaus Barbie, a cruel and brutal SS and Gestapo officer called “the Butcher of Lyon,” but he did not break under torture. The Nazis sent him to the Dachau concentration camp, where he died in 1945.

Hat tip: Tim Harford.

What is the incidence of a tax on tuition waivers?

Here is some basic info: in 2011-2012, 145,000 graduate students received tuition waivers.  Monday I suggested such a tax is a bad idea, but who would bear the burden?  Let’s say there are three parties: the universities, the graduate students, and third-party funders who support research and graduate students.  Those third parties may be, for instance, Harvard donors or the National Science Foundation.

The short-run, first-order effect is that the grad students pay tax on their waivers and fewer of them pursue postgraduate studies.  And if grad students are dead set on attending no matter what, they bear a relatively high burden of the tax.

That said, there is more to the story.  Universities seek to attract graduate students for multiple reasons, the two main candidates being “to enhance their prestige” and “to boost revenue,” or some mix of the two.  Which objective dominates will matter for who bears the tax.

To make up for (some of) the tax, and to maintain the flow of students, universities will opt for some mix of lowering tuition, increasing stipends, and increasing non-taxed forms of aid, such as better office space or teaching opportunities for grad students.  If universities seek to boost their prestige, they will be quite keen to keep up their “Q,” and not eager to lower Q, even with higher P as recompense on the revenue side.  In that case a relatively high share of the burden will fall on universities.

In contrast, if universities pursue revenue, they are more willing to live with a lower Q if accompanied by a higher P.  More of the burden will fall on students, because the accompanying enrollment-maintaining compensations from the universities will be accordingly lower.
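The prestige-versus-revenue distinction maps onto the textbook incidence result that the less elastic side of a market bears more of a tax.  A minimal sketch, treating universities as the demanders of graduate students and students as the suppliers; the elasticity numbers are purely illustrative:

```python
# Textbook tax incidence: the less elastic side bears more of the burden.

def burden_shares(e_demand, e_supply):
    """Return (share borne by demanders, share borne by suppliers),
    given demand and supply elasticities in absolute value."""
    total = e_demand + e_supply
    return e_supply / total, e_demand / total

# Prestige-seeking university: very reluctant to let Q (enrollment) fall,
# i.e., inelastic demand for students -> the university bears most of it.
print(burden_shares(e_demand=0.2, e_supply=1.0))  # (~0.83, ~0.17)

# Revenue-seeking university: willing to trade lower Q for higher P,
# i.e., more elastic demand -> students bear more of the tax.
print(burden_shares(e_demand=1.5, e_supply=1.0))  # (0.4, 0.6)
```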

I don’t know of a paper estimating the effects of taxing student fellowships, an innovation of the 1986 Reagan tax reform.  Can any of you lend a hand here?  It didn’t seem to slow the growth of graduate education much, as far as I can tell, so perhaps the burden there was borne by universities.

Now enter the third parties.  Donors might give more funds to universities to help make up for taxed tuition waivers.  If you are a Harvard alum, for instance, you might wish to see Harvard carry on its great traditions with yet another generation of Ph.D. economists who initially received tuition waivers.  In other words, you want prestige as an alum, and that requires keeping up the flow of Q, the number of quality students, through the program.  Donors will give more resources to the universities, or to the students (through other vehicles), to help make up for the new tax.  In short, to the extent the donors covet prestige, more of the tax will fall on them.  This is a tax on prestige-seeking!

My intuition is that the schools with a strong donor base will put in much more effort to raise money for graduate students, and they will meet with a fair degree of success.  (Note that Harvard’s now-bigger fundraising campaign will to some extent distract the attention of the president and other senior leaders from other programmatic activities at Harvard; in the longer run that could harm Harvard stakeholders.)  But schools below the top tier don’t so much have this option, so they will decline in resources and status relative to the very top schools.  This is a classic case of how imposing new burdens leads to higher market concentration and cements the status of the elites, in this case the educational elites.

Throughout, I am assuming the universities cannot evade the tax outright, for instance by relabeling the categories of tuition and tuition waiver to avoid the bite altogether.  But that is another possible equilibrium, if the details of the law so allow.

Is Piketty’s Data Reliable?

When Thomas Piketty’s Capital in the Twenty-First Century first appeared many economists demurred on the theory but heaped praise on the empirical work. “Even if none of Piketty’s theories stands up,” Larry Summers argued, his “deeply grounded” and “painstaking empirical research” was “a Nobel Prize-worthy contribution”.

Theory is easier to evaluate than empirical work, however, and Phillip Magness and Robert Murphy were among the few authors to take a close look at Piketty’s data; they came to a different conclusion:

We find evidence of pervasive errors of historical fact, opaque methodological choices, and the cherry-picking of sources to construct favorable patterns from ambiguous data.

Magness and Murphy, however, could be dismissed as economic history outsiders with an ax to grind. Moreover, their paper was published in an obscure libertarian-oriented journal. (Chris Giles and Ferdinando Giugliano writing in the FT also pointed to errors but they could be dismissed as journalists.) The Magness and Murphy conclusions, however, have now been verified (and then some) by a respected figure in economic history, Richard Sutch.

I have never read an abstract quite like the one to Sutch’s paper, The One-Percent across Two Centuries: A Replication of Thomas Piketty’s Data on the Distribution of Wealth for the United States (earlier wp version):

This exercise reproduces and assesses the historical time series on the top shares of the wealth distribution for the United States presented by Thomas Piketty in Capital in the Twenty-First Century…. Here I examine Piketty’s US data for the period 1810 to 2010 for the top 10 percent and the top 1 percent of the wealth distribution. I conclude that Piketty’s data for the wealth share of the top 10 percent for the period 1870 to 1970 are unreliable. The values he reported are manufactured from the observations for the top 1 percent inflated by a constant 36 percentage points. Piketty’s data for the top 1 percent of the distribution for the nineteenth century (1810–1910) are also unreliable. They are based on a single mid-century observation that provides no guidance about the antebellum trend and only tenuous information about the trend in inequality during the Gilded Age. The values Piketty reported for the twentieth century (1910–2010) are based on more solid ground, but have the disadvantage of muting the marked rise of inequality during the Roaring Twenties and the decline associated with the Great Depression. This article offers an alternative picture of the trend in inequality based on newly available data and a reanalysis of the 1870 Census of Wealth. This article does not question Piketty’s integrity.

You know it’s bad when a disclaimer like that is necessary. In the body, Sutch is even stronger. He concludes:

Very little of value can be salvaged from Piketty’s treatment of data from the nineteenth century. The user is provided with no reliable information on the antebellum trends in the wealth share and is even left uncertain about the trend for the top 10 percent during the Gilded Age (1870–1916). This is noteworthy because Piketty spends the bulk of his attention devoted to America discussing the nineteenth-century trends (Piketty 2014: 347–50).

The heavily manipulated twentieth-century data for the top 1 percent share, the lack of empirical support for the top 10 percent share, the lack of clarity about the procedures used to harmonize and average the data, the insufficient documentation, and the spreadsheet errors are more than annoying. Together they create a misleading picture of the dynamics of wealth inequality. They obliterate the intradecade movements essential to an understanding of the impact of political and financial-market shocks on inequality. Piketty’s estimates offer no help to those who wish to understand the impact of inequality on “the way economic, social, and political actors view what is just and what is not” (Piketty 2014: 20).
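Sutch’s charge that the top 10 percent values were manufactured by inflating the top 1 percent series “by a constant 36 percentage points” suggests a simple forensic check: if one series is just the other plus a constant, the difference between them is flat across years.  A minimal sketch, with made-up numbers rather than Piketty’s actual figures:

```python
# Forensic check in the spirit of Sutch's finding. Numbers are invented
# for illustration, not Piketty's actual figures.
top1 = {1870: 0.29, 1900: 0.35, 1930: 0.40, 1950: 0.28, 1970: 0.27}
top10 = {yr: s + 0.36 for yr, s in top1.items()}  # the suspect construction

diffs = [round(top10[yr] - top1[yr], 6) for yr in sorted(top1)]
print(diffs)  # a constant 0.36 everywhere: the top-10% series carries
              # no information beyond the top-1% series it was built from
```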

One of the reasons Piketty’s book received such acclaim is that it fed into concerns about rising inequality, and it’s important to note that Sutch is not claiming that inequality hasn’t risen. Indeed, in some cases Sutch argues that it has risen more than Piketty claims. Sutch is, rather, a journeyman of economic history upset not about Piketty’s conclusions but about the methods Piketty used to reach those conclusions.

My Conversation with Larry Summers

Larry was in superb form, and we talked about mentoring, innovation in higher education, monopoly in the American economy, the optimal rate of capital income taxation, philanthropy, Herman Melville, the benefits of labor unions, Mexico, Russia, and China, Fed undershooting on the inflation target, and Larry’s table tennis adventure in the summer Jewish Olympics. Here is the podcast, video, and transcript.

Here is one excerpt:

SUMMERS: Second, the VIX — people tend to underappreciate this. The volatility of the market moves very much with the level of the market. The reason is that if a company has $100 of debt and $100 of equity, it’s 50/50 levered.

If the stock market goes up by $100, then it has $100 of debt and $200 of equity and it’s only one-third levered. So when the stock market goes up, its volatility naturally goes down. And the stock market has gone way up over the last 10 months. That’s a factor operating to make its volatility go significantly down.

It’s also the case if you look at surprises. The magnitude of errors in the consensus estimates of company profits or the consensus estimates of industrial production or what have you, numbers have been coming in close to consensus to an unusual degree over the last few months.

I think all those things contribute to the relatively low level of the VIX, but those are more in the way of ex post explanations. If you had told me everything that was going on in the world and asked me to guess where the VIX would be, I would expect it to have been a little higher than it is right now.
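Summers’s leverage point is easy to reproduce numerically.  A minimal sketch, assuming riskless debt and constant asset volatility (my assumptions, not his), with an arbitrary 10 percent asset volatility:

```python
# With debt fixed, a rising equity market mechanically de-levers the firm,
# which damps equity volatility: sigma_equity = sigma_assets * assets/equity
# (valid under riskless debt and constant asset volatility).

def equity_vol(debt, equity, asset_vol=0.10):
    """Equity volatility implied by leverage, asset_vol held constant."""
    return asset_vol * (debt + equity) / equity

# The $100 debt / $100 equity firm from the transcript (50/50 levered):
print(equity_vol(debt=100, equity=100))  # 0.20
# After the market doubles equity to $200 (one-third levered):
print(equity_vol(debt=100, equity=200))  # 0.15 -- same asset risk, lower vol
```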

And:

COWEN: If there’s an ongoing demand shortfall, as is suggested by many secular stagnation approaches, does that mean monopoly cannot be a major economic problem, because that’s from the supply side, and the supply-side constraint isn’t really binding if you think of there as being multiple Lagrangians? Forgive me for getting technical for a moment. Do you see what I’m saying?

SUMMERS: That wouldn’t have been the way I’d have thought about it, Tyler, but what you’re saying might be right. I think I’d be inclined to say that, if there’s more monopoly, there’s more money going to monopoly firms where there’s a low propensity to spend it, both because the firms don’t invest and because the owners of the firms tend to be rich or endowments that have a low propensity to spend.

So the greater monopoly power, to the extent that it exists, is one factor operating to raise savings and reduce investment which contributes to demand shortfalls and secular stagnation.

I also think that there’s likely to be less entry and competition in markets that aren’t growing rapidly than there is in markets that are growing rapidly. There’s a sense in which less demand over time creates its own lack of supply.

And:

COWEN: What mental qualities make for a good table tennis player?

SUMMERS: Judging by my performance, qualities that I do not possess.

[laughter]

SUMMERS: I think a deft wrist, a certain capacity for concentration, and a great deal of practice. While I practiced intensely in the run-up to the activity, there were other participants who had been practicing intensely for decades. And that gave them a substantial advantage.

Recommended!

If you think you know someone who is very smart, Larry is almost certainly smarter.

The Gender Gap in STEM is NOT What You Think

In a new NBER working paper David Card and Abigail Payne have a stunning new explanation of the gender gap in STEM at universities. The conventional wisdom is that the gender gap is about women and the forces–discrimination, sexism, parenting, aptitudes, choices; take your pick–that make women less likely to study in STEM fields. Card and Payne are saying that the great bulk of the gap is actually about men and their problems. At least that is my interpretation of their results; the authors, to my mind, don’t clearly state just how much their results run against the conventional wisdom. (Have I misunderstood their paper? We shall see.)

The authors use a large data set on Canadian high school students that includes grade 12 (level 4) high school classes and grades as well as each student’s initial university program. Using these data, they find that females are STEM-ready:

…At the end of high school, females have nearly the same overall rate of STEM readiness as males, and slightly higher average grades in the prerequisite math and science courses.  The mix of STEM related courses taken by men and women is different, however, with a higher concentration of women in biology and chemistry and a lower concentration in physics and calculus.

Since females are STEM-ready when leaving high school, you are probably thinking that the gender gap must result either from different entry choices conditional on STEM-readiness or from different attrition rates. No. Card and Payne say that entry rates and attrition rates are similar for males and females. So what explains why males are more likely to take a STEM degree than females?

The main driver of the gender gap is the fact that many more females (44%) than males (32%) enter university.  Simply assuming that non‐STEM ready females had the same university entry rate as non‐STEM ready males would narrow the gender gap in the fraction of university entrants who are STEM ready from 14 percentage points to less than 2 percentage points.

Moreover:

On average, females have about the same average grades in UP (“University Preparation”, AT) math and sciences courses as males, but higher grades in English/French and other qualifying courses that count toward the top 6 scores that determine their university rankings. This comparative advantage explains a substantial share of the gender difference in the probability of pursuing a STEM major, conditional on being STEM ready at the end of high school.

Put (too) simply, the only men who are good enough to get into university are men who are good at STEM. Women are good enough to get into non-STEM and STEM fields. Thus, among university students, women dominate in the non-STEM fields and men survive in the STEM fields. (The former is mathematically certain, while the latter is true only given current absolute numbers of male students. If fewer men went to college, women would dominate both fields.) I don’t know whether this story will hold up, but one attractive feature, as a theory, is that it is consistent with the worrying exit from the labor market of men at the bottom.
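The composition arithmetic can be made concrete with a toy calibration chosen to match the quoted entry rates (44 percent of females, 32 percent of males) and the 14-point gap.  The readiness and entry rates below are my illustrative assumptions, not numbers from the paper:

```python
# Toy calibration of the Card-Payne composition story. Assumed inputs
# (not from the paper): 20% of each gender is STEM-ready, and STEM-ready
# students of both genders enter university at an 80% rate.
stem_ready = 0.20
entry_stem_ready = 0.80
entry_non_ready = {"female": 0.35, "male": 0.20}  # chosen to hit 44% / 32%

for gender, e_nr in entry_non_ready.items():
    entrants_ready = stem_ready * entry_stem_ready
    entrants_other = (1 - stem_ready) * e_nr
    total = entrants_ready + entrants_other
    print(f"{gender}: entry rate {total:.0%}, "
          f"STEM-ready share of entrants {entrants_ready / total:.0%}")
# female: entry rate 44%, STEM-ready share of entrants 36%
# male:   entry rate 32%, STEM-ready share of entrants 50%
# A 14-point gap appears even though high-school STEM readiness is equal,
# and setting the female non-ready entry rate to the male 20% closes it.
```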

If we accept these results, the gender gap industry is focused on the wrong thing. The real gender gap is that men are having trouble competing everywhere except in STEM.

Hat tip: Scott Cunningham.

The new world of monopoly? What about flying?

I frequently see airlines cited as an example where the American economy is obviously more monopolistic.  By some metrics, yes, but what about the final deal?

For more than three years, the average one-way fare between Detroit and Philadelphia never dipped below $308, and sometimes moved higher, topping $385 at one point.

But then, early in 2016, fares suddenly started to fall, according to data from the Bureau of Transportation Statistics. By the end of the year, the average one-way ticket between the two cities stood at just $183.

What changed? The primary factor was Spirit Airlines [a budget carrier].

…Even as a wave of mergers has cut the number of major carriers to four and significantly reduced competition, lower-cost airlines continue to play a role in moderating ticket costs.

…The cost of a round-trip domestic ticket averaged more than $490 in the first half of the year, up slightly compared with 2016, according to Airlines Reporting Corporation, a company that settles flight transactions between a number of carriers and booking services like Expedia.

The jostling, however, has left airline investors skittish. As the publicly traded airlines in July reported earnings for the second quarter, shareholders sold off their shares, worried about the fight over fares and capacity increases.

That is from Micah Maidenberg at the NYT.  In other words, the market still has a fair amount of contestability.

Or consider some more aggregated data.  As for output restrictions, here is the DOT series on aggregate miles flown.  No doubt, there are problems around the time of 9/11 and also the Great Recession, with 2008-2012 being a period of slight quantity contraction.  But in 1985 there were 275,864 [million] total miles flown, in 2006 it was 588,471, and in 2015 it was 641,905.  I’ll ask again: if there is so much extra monopoly, where are the output restrictions?
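A quick check of the growth implied by those DOT figures, which is hardly the signature of an output restriction:

```python
# Compound annual growth of the DOT miles-flown figures quoted above
# (totals in millions of miles).
miles = {1985: 275_864, 2006: 588_471, 2015: 641_905}

def cagr(v0, v1, years):
    """Compound annual growth rate."""
    return (v1 / v0) ** (1 / years) - 1

print(f"1985-2015: {cagr(miles[1985], miles[2015], 30):.1%} per year")  # ~2.9%
print(f"2006-2015: {cagr(miles[2006], miles[2015], 9):.1%} per year")   # ~1.0%
```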

Or look at the price index.  Overall prices are down considerably since 2008, and from about 2000 to 2016 they run from about 250 (eyeballing) to about 270, noting 1998-2010 saw a huge run-up in oil prices.  Since 2005, the U.S. went from having nine major airlines to four.

Maybe you’re upset about quality, but baggage lost each year — one of the easier quality variables to measure — is going down steadily.

Is this perfect competition?  No, of course not.  Is this ideal performance?  No.  Will looking at concentration ratios help you understand the industry very well?  Even more no.  And this is one of the worst cases of changing concentration ratios I can find.  Tomorrow, shall we do booksellers?  Or do I not even need to bother?

Excellent points about the changing nature of mark-ups

Do read the whole post (who wrote it?), but here are a few choice excerpts:

My take is that large US firms with dominant brands/market positions have always commanded monopolistic rents. What has changed is that the rents, which used to be more broadly shared by stakeholders, are now predominantly flowing to shareholders and top management. The change has been driven by the interplay of several developments–shareholder revolution, globalization, rising equity valuations, diminished growth expectations.

And:

…the categories of variable and fixed costs are not iron-clad. Obviously, over long enough time frames, all costs are variable. Leaving that aside, even production-line workers have firm-specific capital. So, the costs related to training them or firing them are not entirely variable. More important, the question is how businesses view employee costs. There is circumstantial evidence that businesses have gone from treating most employees as relatively fixed costs to relatively variable costs. Note that since 1980, the proportion of job losers as a share of those unemployed has risen, suggesting that firms increasingly view employees as a variable cost.
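The fixed-versus-variable distinction also matters for measurement.  One common shortcut computes the mark-up as revenue over variable cost, so merely reclassifying the same labor bill changes the measured number.  A toy illustration with made-up figures:

```python
# Why the fixed/variable split matters for measured mark-ups.
revenue = 100.0
materials = 40.0  # clearly variable
labor = 30.0      # the contested category

markup_labor_variable = revenue / (materials + labor)  # labor as variable
markup_labor_fixed = revenue / materials               # labor as overhead

print(f"labor treated as variable: mark-up = {markup_labor_variable:.2f}")  # 1.43
print(f"labor treated as fixed:    mark-up = {markup_labor_fixed:.2f}")     # 2.50
# Identical economics, very different measured mark-ups -- one reason the
# post cautions that the cost categories are not iron-clad.
```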

And:

Interestingly, note that depreciation as a share has really flattened since 2001, the period in which the NIPA data show the biggest rise in mark-up!

And:

Moreover, industry concentration is only back to the levels of early 1980s.

And:

Globalization probably has played some role in the increase in mark-up. First, the shift of low-end manufacturing offshore means that what is left is high-end manufacturing with greater monopolistic power. So, mark-ups should increase simply from that shift. Unsurprisingly, the NIPA mark-up data show that the rise in mark-up coincided with China’s entry into the WTO and is also consistent with Autor’s findings about the impact of Chinese competition. Second, if firms already have some pricing power, reducing costs through outsourcing should result in higher mark-ups.

You can see that there is much more to be contributed to this debate.

How many sellers are needed for markets to become competitive?

…competitive conduct changes quickly as the number of incumbents increases.  In markets with five or fewer incumbents, almost all variation in competitive conduct occurs with the entry of the second or third firm…once the market has between three and five firms, the next entrant has little effect on competitive conduct.

That is from Bresnahan and Reiss, “Entry and Competition in Concentrated Markets.”

Part of their method is to compare doctor and dentist pricing practices across towns of different sizes, and thus across different numbers of providers.  Then they see where bigger numbers make a difference in terms of pricing.  Plumbers and tire dealers are considered too.  One lesson seems to be that market concentration has to rise to very high levels to make a big difference in outcomes.

If you are wondering, the “sweet spot” for a town to have a single dentist or doctor is a population between 700 and 900, at least circa the early 1990s.
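Bresnahan and Reiss’s logic can be sketched with entry-threshold ratios: the town population needed per firm for each successive entrant.  The populations below are illustrative rather than their estimates, except that the single-firm figure echoes the quoted sweet spot:

```python
# Bresnahan-Reiss-style entry-threshold ratios. S[N] is the (hypothetical)
# town population needed to support N firms; the N=1 figure echoes the
# 700-900 single-dentist "sweet spot" quoted above.
S = {1: 800, 2: 2_600, 3: 4_500, 4: 6_200, 5: 7_900}

per_firm = {n: pop / n for n, pop in S.items()}  # s_N = S_N / N
ratios = {n: round(per_firm[n + 1] / per_firm[n], 2) for n in range(1, 5)}
print(ratios)  # {1: 1.62, 2: 1.15, 3: 1.03, 4: 1.02}
# Ratios well above 1 early (the second firm needs far more than double a
# monopolist's market) and near 1 thereafter are the signature of conduct
# turning competitive by the third-to-fifth firm.
```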

Thursday assorted links

1. Noah Smith responds on market power.  I say concentration alone doesn’t mean much; that’s been accepted since the 1970s, and I still see no evidence for market power showing up as retail output restrictions at a higher pace, and that is the most direct and welfare-relevant prediction of the theory.  Without that evidence, the story doesn’t have support, and the burden of proof is on that side of the argument.  (Furthermore, I am worried that they don’t even mention this test, much less perform it.)  And if intermediate input market power doesn’t “trickle down” into consumer goods output restrictions…it’s like that proverbial tree falling in the forest.

And here is a Matt Yglesias dialogue with the authors; I haven’t heard it yet.

2. There is a world championship in Excel spreadsheets, and it was just won by a 17-year-old from northern Virginia.

3. Robin Hanson’s Age of Em TED talk.

4. Andrew Batson on Chekhov’s Sakhalin Island.

5. Julia Galef lists (but does not endorse) unpopular ideas.  I agree with very few of them, by the way, but they are intrinsically interesting to ponder.  What also strikes me is the implicit terms of debate, mostly moves toward greater social liberalism.  How about Christian or extremely non-egalitarian ideas?

6. Are “walking marriages” disappearing in Sichuan?