Month: October 2013
From Down Under, one of the few countries to allow commercial drone deliveries:
Australian textbook rental startup Zookal will begin utilizing drones to make its deliveries in Australia next year, with ambitions of bringing the unique, unmanned delivery method to US customers by 2015. The company says this marks the first commercial use of fully automated drones worldwide. It will fulfill deliveries in Sydney using six drones to start, dropping off textbook purchases at an outdoor location of the customer’s choosing. To wipe away any potential privacy or surveillance fears, the drones aren’t equipped with cameras. Instead, built-in anti-collision technology keeps them clear of trees, buildings, birds, and other potential obstacles.
Both the location of the user and the drone’s GPS coordinates are transmitted via a smartphone app, and Zookal claims deliveries can be completed in as little as two to three minutes once a drone takes flight. You can track the drone’s progress from the app (which will only be available on Android at launch) and head outside once it’s getting close. The drone never fully lowers itself to ground level, but rather hovers overhead and lowers its textbook delivery with the tap of a button on your smartphone.
I’ve never understood this argument, which is sometimes cited as a reason to go to a non-Indian restaurant on a given day. How should people cope who live in India? They have Indian food many, many days in a row, and often (not always, by any means) poorer Indians are choosing from a less varied menu of that food than Americans who visit Indian restaurants. Would it be so terrible to eat only Indian food, whether at home or in restaurants, every day for a week? Every day for a month? I don’t see why. So how about two days in a row? Or two meals in a row? Three? What if you had Indo-Chinese food somewhere in the middle of the sequence? Momos cooked by Nepalese immigrants?
Until a group meal yesterday, I had Korean food five days in a row, three meals a day, much to my joy. I bet some Koreans, in Korea, did the same.
That’s real apples, not the company. Timothy Taylor reports:
But even within the production of apples, there is global specialization. The US economy both exports and imports apples, depending on the season, but overall runs a trade surplus in apples. However, the U.S. runs a substantial trade deficit in frozen apple juice concentrate, relying heavily on imports from China. Here are some statistics about U.S. trade in apples from the U.S. Department of Agriculture (which are helpfully archived on-line at Cornell University).
You should note that both increasing returns models and Heckscher-Ohlin approaches to trade can give rise to what is commonly called “specialization,” so citing specialization does not answer the question posed in the title of this post, nor does it necessarily favor increasing returns to scale models.
By the way, here is a new Paul Krugman paper on trade (pdf).
Brink’s new paper is here, here is one excerpt:
Consider the four constituent elements of economic growth tracked by conventional growth accounting: (1) growth in labor participation, or annual hours worked per capita; (2) growth in labor quality, or the skill level of the workforce; (3) growth in capital deepening, or the amount of physical capital invested per worker; and (4) growth in so-called total factor productivity, or output per unit of quality-adjusted labor and capital. Over the course of the 20th century, these various components fluctuated in their contributions to overall growth. The fluctuations, however, tended to offset each other, so that weakness in one element was compensated for by strength in another. In the 21st century, this pattern of offsetting fluctuations has come to a halt as all growth components have fallen off simultaneously.
The simultaneous weakening of all the components of economic growth does not mean that slow growth is inevitable from here on out. The trends for one or more of them could reverse direction tomorrow. Nevertheless, it is difficult to resist the conclusion that the conditions for growth are less favorable than they used to be. In other words, growth is getting harder.
Brink offers further remarks here. On October 29, at noon, I’ll be doing a Cato Forum with Brink on this paper.
We exploit a policy discontinuity at U.S. state borders to identify the effects of unemployment insurance policies on unemployment. Our estimates imply that most of the persistent increase in unemployment during the Great Recession can be accounted for by the unprecedented extensions of unemployment benefit eligibility. In contrast to the existing recent literature that mainly focused on estimating the effects of benefit duration on job search and acceptance strategies of the unemployed — the micro effect — we focus on measuring the general equilibrium macro effect that operates primarily through the response of job creation to unemployment benefit extensions. We find that it is the latter effect that is very important quantitatively.
There is an ungated version of the paper here (pdf).
One in ten Icelanders will publish one [a book].
“Does it get rather competitive?” I ask the young novelist, Kristin Eiriksdottir.
“Yes. Especially as I live with my mother and partner, who are also full-time writers. But we try to publish in alternate years so we do not compete too much.”
Mashable: Street artist Banksy set up a stall in New York’s Central Park Saturday, selling his original pieces — worth tens of thousands of dollars each — for $60.
The event was documented on video and posted on Banksy’s website. It took several hours for the first artwork to be sold, to a lady who managed to negotiate a 50% discount for two small canvases. There were only two more buyers, and by 6 p.m. the stall was closed with total earnings of $420.
For comparison, in 2007 Banksy’s work “Space Girl & Bird” was purchased for $578,000, and in 2008 his canvas “Keep it Spotless” was sold for $1,870,000.
What would Fama, Shiller and Hansen say about these asset prices?
Maximizing revenue for non-reproducible art is a matching process: the artist must find the handful of buyers in the world willing to pay the most (see An Economic Theory of Avant-Garde and Popular Art), so perhaps one can explain this as a failure of marketing.
An alternative explanation is that modern art is a bubble: people buy only because they expect to sell to others; take away this expectation and the art doesn’t sell. (Fashions and fads can help the latter explanation along, but there still needs to be an expectation of a future sale.)
Or perhaps Banksy is commenting on an earlier Nobel winner.
There is a recent paper (pdf) by Handel, Hendel, and Whinston, and it covers the issue of adverse selection through the new ACA exchanges. The second paragraph of the abstract is this:
We find that market unravelling from adverse selection is substantial under the proposed pricing rules in the Affordable Care Act (ACA), implying limited coverage for individuals beyond the lowest coverage (Bronze) health plan permitted. Although adverse selection can be attenuated by allowing (partial) pricing of health status, our estimated risk preferences imply that this would create a welfare loss from reclassification risk that is substantially larger than the gains from increasing within-year coverage, provided that consumers can borrow when young to smooth consumption or that age-based pricing is allowed. We extend the analysis to investigate some related issues, including (i) age-based pricing regulation (ii) exchange participation if the individual mandate is unenforceable and (iii) insurer risk-adjustment transfers.
The core result here does not require individuals to violate the legal mandate, although the paper has a good and sobering discussion of that topic as well. The adverse selection here is occurring across plans of differing quality. On the positive side, having everyone enrolled in the “least comprehensive” plan can be a plus rather than a minus, depending on your point of view. On the negative side, the paper does not consider how suppliers might respond by limiting the quality of their network and making the least comprehensive plan even less useful.
For the pointer I thank Dan in Euroland.
The global financial crisis and ensuing Great Recession reduced the income and wealth of many families, but older families generally fared better than young and middle-aged families. The Federal Reserve’s Survey of Consumer Finances reveals that being young was a significant risk factor during the downturn, regardless of a family’s race, ethnicity, or education level. Among older families, those headed by someone 70 or over fared slightly better than those headed by someone between 62 and 69. Income and wealth also increased most strongly among older families during the two decades preceding the crisis. Part of the explanation for favorable income and wealth trends among currently living older Americans is a positive birth-year cohort effect. After controlling for a host of factors related to income and wealth, we find that cohorts born in the late 1930s and 1940s have experienced more favorable income and wealth trajectories over their life courses than earlier- or later-born cohorts. While it is too soon to know how cohorts born in recent decades will fare over their lifetimes, it appears that the median Baby Boomer (born in the 1950s and early 1960s) and median member of Generation X (born in the late 1960s and 1970s) are on track for lower income and wealth in older age than those born in the 1930s and 1940s, holding constant many factors other than when a person was born.
One driving force seems to be that the older generation was simply more motivated to save. And here is a dramatic sentence:
Among young and middle-aged families, the median levels of net worth were 30.5 percent and 24.1 percent lower in 2010 than in 1989, respectively.
The paper presents many interesting facts, but I would start with pp.9-10.
From The Guardian, here is an argument that middle class youth in Great Britain today will end up faring worse than did their parents.
4. An attorney general you can believe in (Maryland, police cars, sirens, driving on the shoulder, etc.).
Hansen’s work is the most technical and the most difficult to explain to a layperson. The brief version is that in 1982 Hansen developed the Generalized Method of Moments, a new and elegant way to estimate many economic models that requires fewer assumptions and is often more powerful than other methods.
Here is the basic idea in a nutshell. The method of moments is an old and intuitive technique for estimating the parameters of a data generating process. A moment is an expectation of the form E(X^r), where r is a positive integer. For example, if r=1 then the first moment, E(X), is just the mean (and the first and second moments together pin down the variance, since Var(X) = E(X^2) - E(X)^2). If the true mean in the data generating process is M then we can write a moment condition, E(X) - M = 0. The method of moments says that to estimate M we should solve that condition by replacing E(X) with the sample mean. In other words, a good estimator for the unknown population mean is the sample mean, i.e. the mean in the data that you have. Pretty obvious so far.
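The plain method of moments above can be sketched in a few lines of Python. The data generating process and all numbers here are my own toy illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data generating process with true mean M = 5 and
# standard deviation 2 (numbers invented for illustration).
M_true = 5.0
x = rng.normal(loc=M_true, scale=2.0, size=10_000)

# Method of moments: solve E(X) - M = 0 by replacing E(X)
# with the sample mean.
M_hat = x.mean()

# The first two moments together give the variance:
# Var(X) = E(X^2) - E(X)^2.
var_hat = (x**2).mean() - x.mean() ** 2
```

With 10,000 draws, M_hat should land close to the true mean of 5 and var_hat close to the true variance of 4.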
Now let’s start to generalize. First, there are many moments other than the mean and variance. Indeed, economic theory often provides moment conditions that may be written E(f(X,M)) = 0, where M now stands for a parameter, not necessarily the mean, and f can be a non-linear function. For example, rational expectations models often provide conditions of the form E(f(X)) - M = 0, i.e. that forecasts should equal true values on average, and macro models imply that various differences, such as changes in consumption, should be uncorrelated with past information, and so forth. Indeed, finance and macroeconomic theory have provided a surplus of moment conditions, and many of these different conditions imply something about the same parameter. Now, and this is key, when we have more moment conditions than parameters we can’t choose the parameters to make all the moment conditions true, i.e. we can’t make all those moment conditions equal to zero in the sample. So what to do?
What Hansen did with the generalized method of moments is show that when we have more moment conditions than parameters we can best estimate those parameters by giving more weight to the conditions that we have better information about. In other words, if we have two conditions and we can’t force both of them to zero by a choice of parameter, then choose the parameter so that the moment condition we know the most about (the one with the least variance) is closer to zero than the one we know less about. Again, the idea is intuitive, but Hansen showed how to make these choices and then proved that when the parameters are chosen in this way they have good statistical properties such as consistency (they get closer to the true values as the sample size increases). Importantly, estimating a model using these moment conditions does not require untenable assumptions about the entire distribution of returns. Hansen then also showed, as with Hansen and Hodrick (1980) and Hansen and Singleton (1982, 1983), how these methods could be applied to a large class of macro and finance models, including asset pricing, the latter of which links Hansen with the work of Fama and Shiller, as does the important bound discovered by Hansen and Jagannathan (1991).
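The weighting idea can be sketched with a toy example: two hypothetical data sources measuring the same parameter M, one precise and one noisy, giving two moment conditions for one parameter. All the numbers and names here are invented for illustration; with a diagonal inverse-variance weight matrix the minimizer has a closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
M_true = 3.0

# Two data sources for the same parameter M: two moment conditions,
# E(X1) - M = 0 and E(X2) - M = 0, but only one parameter, so in the
# sample both conditions cannot be driven to zero at once.
x1 = rng.normal(M_true, 1.0, size=2_000)  # precise source
x2 = rng.normal(M_true, 5.0, size=2_000)  # noisy source

m1, m2 = x1.mean(), x2.mean()

# Weight each condition by the precision (inverse variance) of its
# sample moment: the condition we know more about counts for more.
# Minimizing g(M)' W g(M) with this diagonal W gives a closed-form
# precision-weighted average of the two sample means.
w1 = len(x1) / x1.var()
w2 = len(x2) / x2.var()
M_gmm = (w1 * m1 + w2 * m2) / (w1 + w2)
```

The resulting estimate sits much closer to the precise source's sample mean than a naive average of the two conditions would, which is exactly the intuition in the paragraph above.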
This is a prize given for time series econometrics: how to deal with imperfect data and with changing variances for the variables being estimated. Can you say “Generalized Method of Moments” (GMM)? Hansen teaches in the economics department of the University of Chicago.
For years now journalists have asked me if Hansen might win, and if so, how they might explain his work to the general reading public. Good luck with that one.
Unlike maximum likelihood estimation (MLE), GMM does not require complete knowledge of the distribution of the data. Only specified moments derived from an underlying model are needed for GMM estimation. In some cases in which the distribution of the data is known, MLE can be computationally very burdensome whereas GMM can be computationally very easy. The log-normal stochastic volatility model is one example. In models for which there are more moment conditions than model parameters, GMM estimation provides a straightforward way to test the specification of the proposed model. This is an important feature that is unique to GMM estimation.
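The specification test mentioned in the passage above is Hansen's J test: with more moment conditions than parameters, n times the minimized (optimally weighted) objective is asymptotically chi-squared under correct specification. Here is a minimal sketch, assuming a toy model where X is normal with unknown mean M and known variance 1, which gives two moment conditions in one parameter; the data and grid search are my own illustration, not the estimator one would use in practice:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
M_true = 2.0
x = rng.normal(M_true, 1.0, size=n)  # variance assumed known (= 1)

def gbar(M):
    # Two sample moment conditions in one parameter M (overidentified):
    # E[x - M] = 0 and E[x^2 - M^2 - 1] = 0.
    return np.array([(x - M).mean(), (x**2 - M**2 - 1).mean()])

def S(M):
    # Sample covariance matrix of the moment conditions.
    g = np.column_stack([x - M, x**2 - M**2 - 1])
    g = g - g.mean(axis=0)
    return g.T @ g / n

# Two-step GMM: first minimize with identity weights, then re-minimize
# with the optimal weight matrix S^{-1} from the first-step estimate.
grid = np.linspace(0.0, 4.0, 4001)
M1 = grid[np.argmin([gbar(M) @ gbar(M) for M in grid])]
W = np.linalg.inv(S(M1))
M2 = grid[np.argmin([gbar(M) @ W @ gbar(M) for M in grid])]

# Hansen's J statistic: n times the minimized objective, asymptotically
# chi-squared with (moments - parameters) = 1 degree of freedom when
# the model is correctly specified.
J = n * gbar(M2) @ W @ gbar(M2)
```

Because the model here is correctly specified, J should usually be small; a large J would be evidence against the moment conditions themselves.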
If you read this piece with Hodrick, you will see that Hansen’s work is instrumental for testing the advanced versions of the propositions of Fama and Shiller. In this critical sense, the three prizes are quite tightly unified. And see this paper too with Singleton. Here’s the most concrete sentence you are going to squeeze out of me on this one: if you want to do serious analysis of whether changing risk premia can help rationalize observed asset price movements, Hansen’s contributions will prove essential.
Robert Shiller is best known for warning about the internet stock market bubble and later the housing bubble. What is most impressive to me, however, is that most people who think that markets can be inefficient are anti-market. Shiller’s solution to market problems, however, is more markets! The housing market, for example, has traditionally had two problems. Since each house is unique, there was no market index of housing prices, so people couldn’t easily see bubbles, and if they could see them on the ground, there was no easy way to short the market (to try to profit from the bubble in a way that would moderate it). Moreover, because there were no good housing indexes, a very large share of the average person’s wealth has been tied up in an asset that can fluctuate substantially in price. Most house buyers, in other words, are putting all their eggs in one basket and crossing their fingers that the basket doesn’t go bust. In recent years, that has been a very unfortunate bet.
Shiller’s solution to the problems in the housing market has been to make the market better: he created, with Case and Weiss, the Case-Shiller Index. For the first time, it’s possible to see housing prices in real time and compare them with averages over time, and it’s possible to buy options and futures on the index, which will help with forecasting. Moreover, it’s possible that in the future insurance products can be built on local versions of the index; thus you could insure yourself against big declines in the price of housing in your neighborhood.
Shiller’s housing index is also a window into how macro markets could be used to create livelihood insurance, a type of private unemployment insurance. Moral hazard and adverse selection make it difficult to protect any single individual from unemployment, but indexes of the unemployment rate for dentists or construction workers could be used to provide some insurance for workers in these fields when conditions in their entire industry are poor.
I featured one of Shiller’s biggest ideas in Entrepreneurial Economics: markets in GDP. A GDP market would allow shares of GDP to be bought and sold; add to this Hansonian prediction markets and you are a long way toward an ideal way to evaluate the effect of major policies. Moreover, a GDP market would allow the creation of many insurance products. We are all less diversified than is ideal. It would be optimal to trade some shares in US GDP for shares in World GDP, which is more stable. We can do this if we create Shiller GDP markets throughout the world.
Shiller’s book Macro Markets is truly visionary and I hope the Nobel brings a lot of attention to these ideas.