Month: October 2013
This spending, however, no longer yields rich returns. Going to university racks up tuition fees and keeps young people out of the job market for four years. After graduation it takes an average of 11 months to find a first job. Once found, the jobs remain better paid and more secure than the positions available to high-school graduates, but the gap is narrowing. The McKinsey Global Institute reckons that the lifetime value of a college graduate’s improved earnings no longer justifies the expense required to obtain the degree. The typical Korean would be better off attending a public secondary school and diving straight into work.
If the private costs are no longer worthwhile, the social costs are even greater. Much of South Korea’s discretionary spending on private tuition is socially wasteful. The better marks it buys do not make the student more useful to the economy. If one student spends more to improve his ranking, he may land a better job, but only at the expense of someone else.
Even in terms of a signaling model, it seems this spending has gone too far. And indeed this is showing up in the numbers:
The proportion of high-school graduates going on to higher education rose from 40% in the early 1990s to almost 84% in 2008. But since then, remarkably, the rate has declined (see chart 2). South Korea’s national obsession with ever higher levels of education appears to have reached a ceiling.
The article, from The Economist, is interesting throughout.
Many people are calling this book the novel of the year (reviews here). It’s pretty good and it held my attention — I read 780 pp. and was never tempted to quit. It is an ideal plane read, but I don’t expect it will stick with me. I put it in the “worth reading if you’ve read most of the other books you want to read” category, but that is not a space you should wish to inhabit.
2. Be unemployed (unless you do 1)
3. Have a meme go viral
4. Try an app relating to “the quantified self”. (Whether to track spending, sleep, etc.)
5. Make a mistake publicly on the internet where it will live on forever
Travel is prominent as well. These two items remind me of Ben Casnocha:
8. Learn about behavioural economics and the mistakes you might make, such as how you may be affected by projection bias. You can easily waste a lot of time being upset about things that won’t matter in time, or going in the wrong direction because of mistaken beliefs.
9. Remember that xkcd comic strip about the value of becoming more efficient at a task? When you’re young, you stand to benefit for a lot longer from any positive improvements you can make. So figure out how to eat well, what kind of things make you happy, etc. Invest a lot of time in learning, not necessarily formally
She sums it up like this:
I feel that with increasing inequality, using your youth well is all the more important, something I bet Tyler Cowen would agree with.
What would you add to the list?
The Dallas Safari Club said Friday it aims to raise up to a million dollars for endangered black rhinoceroses by auctioning off a permit to kill one in Namibia. The move has raised the ire of wildlife preservation organizations, who question its ethics.
Ben Carter, executive director of the Dallas Safari Club, told Agence France Presse the Namibian government “selected” his hunting club to auction a black rhino hunting permit for use in one of its national parks. Namibia has an annual quota to kill up to five black rhinos out of the southern African nation’s herd population of 1,795 animals.
“First and foremost, this is about saving the black rhino,” Carter said.
The permit is expected “to sell for at least $250,000, possibly up to $1 million,” and will be auctioned off at the Club’s annual convention from Jan. 9-12 next year. The Conservation Trust Fund for Namibia’s Black Rhino will receive 100 percent of the sale price, the Club said.
There is more here, courtesy of the excellent Mark Thorson.
Advertising uses repetition to increase consumers’ preference for brands. Initially, novel brands gain in popularity due to repetition, which increases the likelihood that consumers later buy the brands. Particularly for novel brands, excessive exposure and repetition are necessary to establish the brand name in the first place. Remember your initial irritation upon encountering the names YAHOO, GOOGLE and WIKIPEDIA for the first time; now they are imprinted in your brain.
Basic psychological research has already shown that the psychological mechanism behind this repetition effect is the ease with which we perceive information. Repeatedly perceived information is easier for the brain to process, which saves capacity and thus feels positive.
Concerning brand names, recent research by Sascha Topolinski and Fritz Strack has shown that this feeling of ease, and the ensuing repetition effects, actually stem from the mouth. Each time we encounter a person’s or product name, the lips and the tongue automatically simulate the pronunciation of that name. This happens covertly, that is, without our awareness and without actual mouth movements. During inner speech, the brain attempts to utter the novel name. When names are presented repeatedly, this articulation simulation is trained and thus runs more easily for repeated than for novel names. Crucially, if this inner speech is disturbed, for instance while chewing gum or whispering another word, the articulation of words cannot be trained and the repetition effect vanishes. People who are chewing something are immune to word repetition: they do not prefer familiar words over novel ones.
The present study applied this to the real-world scenario of advertising in movie theaters. There, people usually consume popcorn and other snacks while watching commercials, which disturbs the inner articulation of brand names.
1. Is raising the Medicare age the right way to go? I recommend focusing on the doc fix instead.
2. Nigerian infrastructure markets in everything. Fun, yes, but also an interesting long thoughtful piece.
Here is their abstract (pdf):
We evaluate possible explanations for the absence of a persistent decline in inflation during the Great Recession and find commonly suggested explanations to be insufficient. We propose a new explanation for this puzzle within the context of a standard Phillips curve. If firms’ inflation expectations track those of households, then the missing disinflation can be explained by the rise in their inflation expectations between 2009 and 2011. We present new econometric and survey evidence consistent with firms having similar expectations as households. The rise in household inflation expectations from 2009 to 2011 can be explained by the increase in oil prices over this time period.
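The abstract invokes a “standard Phillips curve” without writing it out. A generic expectations-augmented form (my notation, not taken from the paper) makes the mechanism concrete:

```latex
% Expectations-augmented Phillips curve (generic textbook form;
% notation is illustrative, not the paper's):
%   \pi_t  = inflation at time t
%   \mathbb{E}^{f}_{t}[\pi_{t+1}] = firms' expected inflation
%   u_t - u^{n} = unemployment gap
\pi_t = \mathbb{E}^{f}_{t}[\pi_{t+1}] - \kappa\,(u_t - u^{n}) + \varepsilon_t
```

If firms’ expectations track households’, whose expected inflation rose with oil prices from 2009 to 2011, the first term rises and offsets the drag from the large unemployment gap, so measured inflation need not fall.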
Writing on the paper, here is Jim Hamilton’s bottom line:
The phenomenon identified by Coibion and Gorodnichenko would undermine the Fed’s ability to stimulate the economy in a number of important respects. First, it makes it much more difficult for the Fed to try to justify its actions to the public on the grounds that inflation is currently too low. Second, it makes it harder for the Fed to stimulate the economy without raising inflation, particularly if one byproduct of stimulus efforts is an increase in the relative price of oil. Third, it implies that ex ante real interest rates, if we base that concept on the perceptions of large numbers of economically important decision makers, are extremely negative at the moment, casting doubt on the claim that a primary policy objective should be to make them even more negative.
By Jeff Sommer, it is interesting throughout. Here is one good part:
Shiller and Thaler helped to found the field of behavioral finance to help explain a lot of these anomalies. Where’s the difference between the two views, as you see it?
If I were to characterize what differentiates me from Shiller or Thaler, it’s basically we agree on the facts — there is variation in expected returns, which leads to some predictability in returns. Where we disagree is whether it’s rational or irrational. And there’s nothing in the available evidence that allows one to really settle that in a convincing way. The stuff that both Shiller and I have done has been very illuminating in terms of the behavior of returns. The interpretation of that is open for reasonable disagreement.
I think all points of view should get a full airing, and that’s why I’m thrilled to get the prize with Shiller.
Coming from the other side, here is Shiller on Fama.
The front end technology is not the problem here. It would be nice if it was the problem, because web page scaling issues are known problems and relatively easy to solve.
The real problems are with the back end of the software. When you try to get a quote for health insurance, the system has to connect to computers at the IRS, the VA, Medicaid/CHIP, various state agencies, Treasury, and HHS. They also have to connect to all the health plan carriers to get pre-subsidy pricing. All of these queries receive data that is then fed into the online calculator to give you a price. If any of these queries fails, the whole transaction fails.
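The all-or-nothing fan-out described above can be sketched in a few lines. This is a minimal illustration of the brittleness, not the actual healthcare.gov code; the backend functions are made-up stand-ins for the real agency and carrier interfaces:

```python
import concurrent.futures

# Hypothetical stand-ins for the backend systems named above;
# the real interfaces are of course not these functions.
def query_irs(applicant):       return {"income": 52000}
def query_medicaid(applicant):  return {"eligible": False}
def query_carriers(applicant):  return {"base_premium": 410.0}

BACKENDS = [query_irs, query_medicaid, query_carriers]

def get_quote(applicant, timeout=5.0):
    """All-or-nothing fan-out: every backend must answer in time,
    or the whole quote fails -- the brittleness described above."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = {pool.submit(fn, applicant): fn.__name__ for fn in BACKENDS}
        # Any exception or timeout in any single query aborts the quote.
        for fut in concurrent.futures.as_completed(futures, timeout=timeout):
            results[futures[fut]] = fut.result()
    return results
```

With real backends, one slow legacy system raising a timeout here is enough to fail the entire transaction, which is exactly the failure mode the paragraph describes.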
Most of these systems are old legacy systems with their own unique data formats. Some have been around since the 1960s, and the people who wrote the code that runs on them are long gone. If one of these old crappy systems takes too long to respond, the transaction times out.
Amazingly, none of this was tested until a week or two before the rollout, and the tests failed. They released the web site to the public anyway – an act which would border on criminal negligence if it were done in the private sector and someone was harmed. Their load tests crashed the system with only 200 simultaneous transactions – a load that even the worst-written front-end software could easily handle.
When you even contemplate bringing an old legacy system into a large-scale web project, you should do load testing on that system as part of the feasibility process, before you ever write a line of production code, because if those old servers can’t handle the load, your whole project is dead in the water if you are forced to rely on them. There are no easy fixes for the fact that a 30-year-old mainframe cannot handle thousands of simultaneous queries. And upgrading all the back-end systems is a bigger job than the web site itself. Some of those systems are still there because attempts to upgrade them failed in the past. Too much legacy software, too many other co-reliant systems, etc. So if they aren’t going to handle the job, you need a completely different design for your public portal.
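The feasibility check recommended above can be sketched simply: before writing production code, fire simultaneous queries at a legacy backend and count how many answer in time. The probe below is a hedged illustration (the concurrency default echoes the 200-transaction figure from the text; `query_fn` is whatever client call the legacy system exposes):

```python
import concurrent.futures
import time

def probe_backend(query_fn, concurrency=200, timeout=5.0):
    """Feasibility probe: fire `concurrency` simultaneous queries at a
    legacy backend and report (completed, failed) within `timeout`.
    Run this before committing a design to that backend."""
    ok = failed = 0
    deadline = time.monotonic() + timeout
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        futures = [pool.submit(query_fn) for _ in range(concurrency)]
        for fut in futures:
            try:
                # Shared deadline: late responders count as failures.
                fut.result(timeout=max(0.0, deadline - time.monotonic()))
                ok += 1
            except Exception:
                failed += 1
    return ok, failed
```

If `failed` is nonzero at realistic load, the design decision the paragraph describes – a different portal architecture, caching, or queuing in front of the legacy system – has to be made up front, not after launch.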
A lot of focus has been on the front-end code, because that’s the code that we can inspect, and it’s the code that lots of amateur web programmers are familiar with, so everyone’s got an opinion. And sure, it’s horribly written in many places. But in systems like this the problems that keep you up at night are almost always in the back-end integration.
The root problem was horrific management. The end result is a system built incorrectly and shipped without doing the kind of testing that sound engineering practices call for. These aren’t ‘mistakes’, they are the result of gross negligence, ignorance, and the violation of engineering best practices at just about every step of the way.
…“No way would Apple, Amazon, UPS, FedEx outsource their computer systems and software development, or their IT operations, to anyone else.”
You have to be kidding. How do you think SAP makes a living? Or Oracle? Or PeopleSoft? Or IBM, which has become little more than an IT service provider to other companies?
Everyone outsources large portions of their IT, and they should. It’s called specialization and division of labor. If FedEx’s core competence is not in IT, they should outsource their IT to people who know what they are doing.
In fact, the failure of Obamacare’s web portal can be more reasonably blamed on the government’s unwillingness to outsource the key piece of the project – the integration lead. Rather than hiring an outside integration lead and giving them responsibility for delivering on time, for some inexplicable reason the administration decided to make the Center for Medicare and Medicaid services the integration lead for a massive IT project despite the fact that CMS has no experience managing large IT projects.
Failure isn’t rare for government IT projects – it’s the norm. Over 90% of them fail to deliver on time and on budget. But more frighteningly, over 40% of them fail absolutely and are never delivered. This is because the core requirements for a successful project – solid up-front analysis and requirements, tight control over requirements changes, and clear coordination of responsibility with accountability – are all things that government tends to be very poor at.
The mystery is why we keep letting them try.
South Korea’s success has been deep but not wide. Almost half of its population lives, works and competes in Seoul. Its occupational structure is also narrow. The number of professions in South Korea is only two-thirds of the number in Japan and only 38% of that in America. This striking statistic is not lost on the South Korean government (few are). It has appointed a task force to foster 500 promising occupations, such as veterinary nurse, chiropractor and private detective.
Tyler Cowen, an economist at George Mason University, once pointed out that America has more than 3,000 halls of fame, honouring everyone from sportsmen to accountants. If people cannot reach the top of one ladder, they climb a different one. In South Korea, by contrast, people share a common definition of success. Everyone is clambering up the same set of rungs, aspiring to the same prizes and fearing similar failures. Those who say they are trying for something else are not quite believed. “People would rather be the tail of a dragon than the head of a snake,” as one journalist put it.
The entire article is interesting, from The Economist.
I was sceptical at the outset, but quickly won over. The toilet and shower unit is exactly the same as my daughter had in her student accommodation and she much preferred it to having to share bathrooms and toilets with other students. Who wouldn’t?
What really excites me about this opportunity is that land that might otherwise lie idle for five years will be brought back into life and used to provide much-needed temporary accommodation for 36 men and women in Brighton and Hove.
…Before embarking on this venture, we spoke with our homeless clients about the concept. They loved it. In particular, they loved the fact residents would have their own kitchen, bathroom and front door. They felt that being self-contained is far more desirable than a room in a shared house even though the floor space, at 26 sq m, is roughly the same as they would have if they were sharing.
…When it was suggested that we house homeless people in steel shipping containers in a scrap metal yard, I thought it was either April Fool’s Day or we had lost all concept of decency.
There is more here. For the pointer I thank a loyal MR reader.
Here is one new report:
The Problem Generator – which is available to all Wolfram Alpha Pro subscribers now – creates random practice questions for students, and Wolfram Alpha then helps them find the answers step-by-step.
Right now, the Generator covers six subjects: arithmetic, number theory, algebra, calculus, linear algebra and statistics.
Here is a 2011 Kurt VanLehn paper (pdf) on human vs. computer systems of tutoring:
This article is a review of experiments comparing the effectiveness of human tutoring, computer tutoring, and no tutoring. “No tutoring” refers to instruction that teaches the same content without tutoring. The computer tutoring systems were divided by the granularity of their user interface interaction into answer-based, step-based, and substep-based tutoring systems. Most intelligent tutoring systems have step-based or substep-based granularities of interaction, whereas most other tutoring systems (often called CAI, CBT, or CAL systems) have answer-based user interfaces. It is widely believed that as the granularity of tutoring decreases, the effectiveness increases. In particular, when compared to No tutoring, the effect sizes of answer-based tutoring systems, intelligent tutoring systems, and adult human tutors are believed to be d = 0.3, 1.0, and 2.0 respectively. This review did not confirm these beliefs. Instead, it found that the effect size of human tutoring was much lower: d = 0.79. Moreover, the effect size of intelligent tutoring systems was 0.76, so they are nearly as effective as human tutoring.
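The d values in the abstract are Cohen’s d effect sizes: the difference in group means divided by the pooled standard deviation. A minimal sketch of the standard formula (the numbers below are made up for illustration, not data from the paper):

```python
import math

def cohens_d(treatment, control):
    """Cohen's d: difference in group means over the pooled standard
    deviation (standard effect-size formula, not code from the paper)."""
    n1, n2 = len(treatment), len(control)
    m1 = sum(treatment) / n1
    m2 = sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)  # sample variances
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

So d = 0.79 for human tutoring means tutored students scored about 0.79 pooled standard deviations above untutored ones – a large gap, but far below the d = 2.0 folklore figure, and barely above the d = 0.76 found for intelligent tutoring systems.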
One more specific result found in this paper is simply that human tutors very often fail to take advantage of what are supposed to be the advantages of human tutoring, such as flexibility in deciding how to respond to student problems.
By the way, LaunchPad, the new e-portal for our Modern Principles text, contains an excellent adaptive tutoring system.
Here I am interviewed in Tank magazine about my article “An Economic Theory of Avant-Garde and Popular Art, or High and Low Culture,” co-authored with Alex. Excerpt:
EM: Your essay contains one of the most interesting footnotes I’ve ever read: “The interactions between the quantity and subjective quality of art are similar to the interactions analysed by Becker and Lewis (1973) between the quantity and quality of children.”
TC: Becker’s work considered how families might regard “more investment in each child” as a replacement for “having lots of children”, and that is indeed a common substitution as economic development proceeds. Analytically, we can think of artworks as similar to children in this regard. Quality, in the sense of an artist pleasing himself or herself, can substitute for quantity. Syd Barrett perhaps knew he had nowhere left to go, aesthetically. Proust and Cervantes didn’t need to write so many other works, perhaps because they felt satisfied with how thoroughly they expressed their visions through what they did. Balzac took a different course and achieved a different kind of creative satisfaction, yet precisely for that reason he may resonate less with people today than the more idiosyncratic visions of Proust or Cervantes.
The original article you will find here.