Solve for equilibrium
In a new NBER paper, Accounting for the Rise in College Tuition, Grey Gordon and Aaron Hedlund create a sophisticated model of the college market and find that a large fraction of the increase in tuition can be explained by increases in subsidies.
With all factors present, net tuition increases from $6,100 to $12,559. As column 4 demonstrates, the demand shocks, which consist mostly of changes in financial aid, account for the lion’s share of the higher tuition. Specifically, with demand shocks alone, equilibrium tuition rises by 102%, almost fully matching the 106% from the benchmark. By contrast, with all factors present except the demand shocks (column 7), net tuition only rises by 16%.
These results accord strongly with the Bennett hypothesis, which asserts that colleges respond to expansions of financial aid by increasing tuition.
Remarkably, so much of the subsidy is translated into higher tuition that enrollment doesn’t increase! What does happen is that students take on more debt, which many of them can’t pay.
In fact, the tuition response completely crowds out any additional enrollment that the financial aid expansion would otherwise induce, resulting instead in an enrollment decline from 33% to 27% in the new equilibrium with only demand shocks. Furthermore, the students who do enroll take out $6,876 in loans compared to $4,663 in the initial steady state…. Lastly, the model predicts that demand shocks in isolation generate a surge in the default rate from 17% to 32%. Essentially, demand shocks lead to higher college costs and more debt, and in the absence of higher labor market returns, more loan default inevitably occurs.
Sound familiar? Some of these results appear too large to me, and the authors caution that they need to assume a lot of monopoly power to solve their model, so the results should be taken as an upper bound. Nevertheless, the Econ 101 insight that subsidies increase prices (even net prices, for those who are not fully subsidized) holds true.
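That Econ 101 insight can be sketched in a few lines. Here is a toy linear supply-and-demand model, my own illustration and not Gordon and Hedlund’s model, in which per-student aid shifts demand up and inelastic supply lets colleges capture most of the subsidy (all parameter values are hypothetical):

```python
# Students' willingness to pay, net of aid s: P - s = a - b*Q
# Colleges' supply of seats:                  P     = c + d*Q
def tuition_equilibrium(a, b, c, d, s):
    # Solve a - b*Q + s = c + d*Q for the sticker price
    sticker = (d * (a + s) + b * c) / (b + d)
    net = sticker - s          # what students actually pay
    seats = (sticker - c) / d  # equilibrium enrollment
    return sticker, net, seats

# Hypothetical numbers with steep (inelastic) supply, d >> b:
no_aid = tuition_equilibrium(20, 1, 2, 9, 0)    # sticker price 18.2
with_aid = tuition_equilibrium(20, 1, 2, 9, 5)  # sticker price 22.7
# Pass-through d/(b+d) = 0.9: 90% of the $5 subsidy shows up as a
# higher sticker price, so net tuition falls by only 50 cents.
```

With competitive pricing the pass-through is d/(b+d) < 1; the paper’s monopoly-power assumption is what pushes pass-through toward (and past) full capture.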
A while ago Scott Sumner laid out at least part of his framework, so I thought I should lay out some key parts of mine. Here goes:
1. In world history, 99% of all business cycles are real business cycles. No criticism of RBC can change this fact. Furthermore, the propagation mechanism for a “Keynesian business cycle” (arguably a misleading phrase) also relies on RBC theory.
2. In the more recent segment of world history, a lot of cycles have been caused by negative nominal shocks. I consider the Christina and David Romer “shock identification” paper (pdf, and note the name order) to be one of the very best pieces of research in all of macroeconomics. Sometimes central banks tighten when they shouldn’t, and this leads to a recession, due mainly to nominal wage stickiness.
3. Workers are laid off because employers are often (not always) afraid to cut their nominal wages, for fear of busting workplace morale, or in Europe often for legal and union-related reasons.
4. Overall I favor a nominal GDP rule for monetary policy. But most of its gains would come in a few key historical episodes, such as 1929-1932 or 2008-2009. In most periods I don’t think we know what the correct monetary policy should be, nor do we know that it matters. Still, that uncertainty does not militate against an NGDP rule.
5. Once workers are unemployed, nominal wage stickiness is no longer the main reason why they stay unemployed. In fact nominal wage stickiness is largely taken out of the equation because there is no preexisting nominal wage contract for these workers. There may, however, be some residual stickiness due to irrational reservation wages, also known as voluntary unemployment due to stupidity. (You will find a different perspective in Scott’s musical chairs model, which I may cover more soon.)
5b. Monetary stimulus to be effective needs to be applied very early in the job destruction process of a recession. It is much harder to put the pieces back together again, so urgency is of the essence.
6. The successful reemployment of workers depends upon a matching problem, à la Pissarides, Mortensen, and others. Yet this matching problem is poorly understood, and it can involve a mix of nominal and real imperfections. Sometimes it is solved more quickly than expected, as in the recent UK experience, and other times more slowly than expected, as in current Spain. Most of the claims you will read about this reemployment of workers are wrong, enslaved to ideology or dogmatism, or at the very least unjustified. Hardly anyone wants to admit this.
7. Really bad recessions involve deficient aggregate demand, negative shocks to intermediation, some chronic supply-side problems, negative wealth effects, and increases in the risk premium, all together. It is hard to find a quick fix. Furthermore models where AS and AD curves are independent and separable are often misleading, despite their analytic convenience.
8. Given that weak AD is only one of the problems in a bad downturn, and that confidence, risk, and supply side problems matter too, the best question to ask about fiscal policy is how well the money is being spent. The “jack up AD no matter what” approach is, in the final political equilibrium, not doing good fiscal policy any favors.
9. You should neither rule out nor overstate the relevance of Hayek and Minsky. Their views have much in common, despite the difference in ideological mood affiliation and who — government or the market — gets blamed for the downturn. For really bad recessions, usually both institutions are complicit to say the least.
There is more, but I’ll stop there for now.
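A minimal sketch of the matching problem in point 6, using the standard Cobb-Douglas matching function from the Mortensen-Pissarides literature (parameter values here are illustrative, not calibrated):

```python
def job_finding_rate(u, v, A=0.6, alpha=0.5):
    """Matches per period: M = A * u**alpha * v**(1-alpha).
    The job-finding rate is M/u, which depends only on labor-market
    tightness theta = v/u. A and alpha are illustrative values."""
    theta = v / u
    return A * theta ** (1 - alpha)

# 8% unemployment, 4% vacancies: each unemployed worker finds a job
# with probability ~0.42 this period. Doubling vacancies raises the
# rate only by a factor sqrt(2) -- congestion is the friction.
rate = job_finding_rate(0.08, 0.04)
```

The poorly understood part is why A (matching efficiency) moves around: the same vacancy-unemployment data can look like a nominal problem or a real mismatch problem depending on what you assume about it.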
1. Questions that are rarely asked: “How many who think we shouldn’t judge schools by how well students do also think we should judge companies by wages?”
2. An economist gets dessert, sort of (apologies for the video when you click on the link).
3. University of Maryland to spin off its data analytics division into a new company. Solve for the equilibrium.
5. Data on Uber’s surge pricing: it keeps the expected wait time within a remarkably consistent range.
7. The smart basketball. Nein, danke, I prefer self-deception to keep me on the exercise track.
One of my web searches turned up a study from Trinity College’s American Religious Identification Survey (ARIS) on the demographics of Mormons. According to the ARIS study, there are now 150 Mormon women for every 100 Mormon men in the state of Utah—a 50 percent oversupply of women.
Solve for the equilibrium, as they say, and please consider as many different variables as possible…
It still seems quite unlikely to me that Trump survives much past Super Tuesday, much less wins anything. Still, he has done far better than virtually anyone expected.
Think of him as a trial balloon. It’s still floating.
We might also see, for the next election cycle, further entry from rich people who mimic the outrageousness of Trump but not the particular ideas. The signal extraction problem from Trump’s continuing float is not yet solved.
In these senses the media is not wrong to focus on him. What he embodies — no matter how you interpret it — is what is new this election cycle. And the multiplicity of possible interpretations makes it all the more fodder for the media mill.
When I was last living in Chicago, in the spring of 2014, a regular visitor to the department at the University of Chicago and the editor of the Journal of Economic Literature, Steven Durlauf, asked me if I would be interested in writing something for the journal. For many years I had promised Gary Becker that I would write something to help clarify the meaning and role of price theory to my generation of economists, especially those with limited exposure to the Chicago environment, which did so much to shape my approach to economics. With Gary’s passing later that spring, I decided to use this opportunity to follow through on that promise. More than a year later I have posted on SSRN the result.
I have an unusual relationship to “price theory”. As far as I know I am the only economist under 40, with the possible exception of my students, who openly identifies my research as focused on price theory. As a result I am constantly asked what the phrase means. Usually colleagues will follow up with their own proposed definitions. My wife even remembers finding me at our wedding reception in a heated debate not about the meaning of marriage, but of price theory.
The most common definition, which emphasizes the connection to Chicago and to models of price-taking in partial equilibrium, doesn’t describe the work of the many prominent economists today who are closely identified with price theory but who are not at Chicago and study a range of different models. It also falls short of describing work by those like Paul Samuelson who were thought of as working on price theory in their time even by rivals like Milton Friedman. Worst of all it consigns price theory to a particular historical period in economic thought and place, making it less relevant to the future of economics.
I therefore have spent many years searching for a definition that I believe works and in the process have drawn on many sources, especially many conversations with Gary Becker and Kevin Murphy on the topic as well as the philosophy of physics and the methodological ideas of Raj Chetty, Peter Diamond and Jim Heckman among others. This process eventually brought me to my own definition of price theory as analysis that reduces rich (e.g. high-dimensional heterogeneity, many individuals) and often incompletely specified models into ‘prices’ sufficient to characterize approximate solutions to simple (e.g. one-dimensional policy) allocative problems. This approach contrasts both with work that tries to completely solve simple models (e.g. game theory) and empirical work that takes measurement of facts as prior to theory. Unlike other definitions, I argue that mine does a good job connecting the use of price theory across a range of fields of microeconomics from international trade to market design, being consistent across history and suggesting productive directions for future research on the topic.
To illustrate my definition I highlight four distinctive characteristics of price theory that follow from this basic philosophy. First, diagrams in price theory are usually used to illustrate simple solutions to rich models, such as the supply and demand diagram, rather than primitives such as indifference curves or statistical relationships. Second, problem sets in price theory tend to ask students to address some allocative or policy question in a loosely-defined model (does the minimum wage always raise employment under monopsony?), rather than solving out completely a simple model or investigating data. Third, measurement in price theory focuses on simple statistics sufficient to answer allocative questions of interest rather than estimating a complete structural model or building inductively from data. Raj Chetty has described these metrics, often prices or elasticities of some sort, as “sufficient statistics”. Finally, price theory tends to have close connections to thermodynamics and sociology, fields that seek simple summaries of complex systems, rather than more deductive (mathematics), individual-focused (psychology) or inductive (clinical epidemiology and history) fields.
I trace the history of price theory from the early nineteenth century to the late twentieth century, when price theory became segregated at Chicago, set against the dominant currents in the rest of the profession. For a quarter century following 1980, most of the profession either focused on more complete and fully-solved models (game theory, general equilibrium theory, mechanism design, etc.) or on causal identification. Price theory therefore survived almost exclusively at Chicago, which prided itself on its distinctive approach, even as the rest of the profession migrated away from it.
This situation could not last, however, because price theory is powerfully complementary with the other traditions. One example is work on optimal redistributive taxation. During the 1980’s and 1990’s large empirical literatures developed on the efficiency losses created by income taxation (the elasticity of labor supply) and on wage inequality. At the same time a rich theory literature developed on very simple models of optimal redistributive income taxation. Yet these two literatures were largely disconnected until the work of Emmanuel Saez and other price theorists showed how measurements by empiricists were closely related to the sufficient statistics that characterize some basic properties of optimal income taxation, such as the best linear income tax or the optimal tax rate on top earners.
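The top-rate result just mentioned is the cleanest illustration of the sufficient-statistics idea. A sketch of the canonical Saez formula for the revenue-maximizing tax rate on top earners, with illustrative parameter values:

```python
def saez_top_rate(pareto_a, elasticity):
    """Revenue-maximizing linear tax rate on top incomes (Saez 2001):
    tau* = 1 / (1 + a*e). Only two sufficient statistics are needed --
    the Pareto tail parameter a of the top income distribution and the
    elasticity of taxable income e -- both of which empiricists were
    already measuring, which is exactly the connection described above."""
    return 1.0 / (1.0 + pareto_a * elasticity)

# Illustrative US-style values: a ~ 1.5 and e ~ 0.25 give tau* ~ 73%.
saez_top_rate(1.5, 0.25)
```

The whole rich model of heterogeneous earners collapses into two measurable numbers; that reduction, rather than any particular market structure, is the price-theoretic move.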
Yet this was not the end of the story; these price theoretic summaries stimulated empiricists to measure quantities (such as top income inequality and the elasticity of taxable income) more closely connected to the theory, and theorists to propose new mechanisms through which taxes impact efficiency that are not summarized correctly by these formulas. This has created a rich and highly productive dialog between price theoretic summaries, empirical measurement of these summaries and more simplistic models that suggest new mechanisms left out of these summaries.
A similar process has occurred in many other fields of microeconomics in the last decade, through the work of, among others, five of the last seven winners of the John Bates Clark medal. Liran Einav and Amy Finkelstein have led this process for the economics of asymmetric information and insurance markets; Raj Chetty for behavioral economics and optimal social insurance; Matt Gentzkow for strategic communication; Costas Arkolakis, Arnaud Costinot and Andrés Rodriguez-Clare in international trade; and Jeremy Bulow and Jon Levin for auction and market design. This important work has shown what a central and complementary tool price theory is in tying together work throughout microeconomics.
Yet the formal tools underlying these price theoretic approximations and summaries have been much less fully developed than have been analytic tools in other areas of economics. When does adding up “consumer surplus” across individuals lead to accurate measurements of social welfare? How much error is created by assumptions of price-taking in the new contexts, like college admissions or voting, to which they are being applied? I highlight some exciting areas for further development of such approximation tools complementary to the burgeoning price theory literature.
Given the broad sweep of this piece, it will likely touch on the interests of many readers of this blog, especially those with a Chicago connection. Your comments are therefore very welcome. If you have any, please email me at email@example.com.
Stefan Homburg has a new paper on this topic. I don’t quite get how the model hangs together, but still the effort alone strikes me as very real progress in this area:
Japan has been in a benign liquidity trap since 1990. In a benign liquidity trap, interest rates approach zero, prices decline, and monetary policy is ineffective but output and employment perform decently. Such a pattern contradicts traditional macro theories. This paper introduces a monetary general equilibrium model that is compatible with Japan’s performance and resolves puzzles associated with liquidity traps. Possible conclusions for Anglo-Saxon countries and eurozone members are also discussed.
“At least half of Germans, French and Italians say their country should not use military force to defend a NATO ally if attacked by Russia,” the Pew Research Center said it found in its survey, which is based on interviews in 10 nations.
There is more here, and so every great moderation must come to an end…
This is also of note:
According to the study, residents of most NATO countries still believe that the United States would come to their defense.
Eighty-eight percent of Russians said they had confidence in Mr. Putin to do the right thing on international affairs…
Solve for the equilibrium, as they like to say. It is much easier to stabilize a conservative power (e.g., the USSR) than a revisionist power (Putin’s Russia).
It is also worth thinking about how this entire state of affairs has come to pass.
Here is a long and excellent post, whereby Robin outs himself as a strange kind of environmentalist. Do read the whole thing, but here is one summary excerpt:
So, bottom line, the future great filter scenario that most concerns me is one where our solar-system-bound descendants have killed most of nature, can’t yet colonize other stars, are general predators and prey of each other, and have fallen into a short-term-predatory-focus equilibrium where predators can easily see and travel to most all prey. Yes there are about a hundred billion comets way out there circling the sun, but even that seems a small enough number for predators to carefully map and track all of them.
“At first they came for the rabbits…and then they came for me.” I find that intriguing, but I have a more marginalist approach, and perhaps one which encompasses Robin’s hypothesis as a special case. The death of human (and other) civilizations may be a bit like the death of the human body through old age, namely a whole bunch of things go wrong at once. If there were a single key problem, it would be easier to find a patch and prolong things for just a bit more. But if we have reason to believe that, eventually, many things will go wrong at once…such a concatenation of problems is more likely to defeat us. So my nomination for The Great Filter, in a nutshell, is “everything going wrong at once.” The simplest underlying model here is that a) problems accumulate, b) resources can be directed to help solve problems, and c) sometimes problems accumulate more rapidly than they can be solved.
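The a)-b)-c) model can be made concrete in a few lines. A deliberately crude simulation of my own (every parameter here is hypothetical):

```python
import random

def civilization_survives(T=1000, arrival=1.0, capacity=1.2,
                          burst_p=0.01, burst_size=20, threshold=30,
                          seed=42):
    """Problems accumulate each period (a); a fixed capacity of resources
    solves them (b); rare correlated bursts can outrun that capacity (c).
    Collapse is not one killer problem but a backlog that crosses the
    manageable threshold -- everything going wrong at once."""
    rng = random.Random(seed)
    backlog = 0.0
    for _ in range(T):
        backlog += arrival
        if rng.random() < burst_p:       # several things go wrong at once
            backlog += burst_size
        backlog = max(0.0, backlog - capacity)
        if backlog > threshold:
            return False                 # the concatenation defeats us
    return True
```

With no bursts and capacity above the average arrival rate, survival is guaranteed; the correlated bursts are what occasionally overwhelm the very same capacity, which is the point of the nomination.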
This is also why, in many cases, there is no simple “fact of the matter” answer as to why various mighty empires fell in the past. Here is my earlier review of Apocalypto, a remarkable and still underrated movie.
We have shown that in the model with capital, the presence of productive assets carrying a positive marginal product does not eliminate the possibility of a secular stagnation. The key assumption is that capital has a strictly positive rate of depreciation. In the absence of depreciation, capital can serve as a perfect storage technology which places a zero bound on the real interest rate. It is straightforward to introduce other types of assets, such as land used for production, and maintain a secular stagnation equilibrium. For these extensions, however, it is important to ensure that the asset cannot operate as a perfect storage technology as this may put a zero bound on the real interest rate.
Let me recapitulate the basic problem. Secular stagnation models are supposed to exhibit persistent negative real rates of return, but how is this compatible with economic growth and positive investment? Just hold onto stuff, and of course the government can help you do this with safe assets, if need be. The earlier models had no capital, which ruled out this possibility. The new model assumes storage costs for capital are fairly high, or alternatively that the depreciation rate for capital is high. Since you can’t sit on your wealth, you might as well invest it at negative real rates of return.
But at the margin, storage costs for goods (and some capital) are not that high. My cupboard is full of beans and cumin seed, but I eat the stuff only slowly. In the meantime it is hardly a burden, nor is it risky since I know it will be tasty once I make the right brew. Art has negative storage costs (for the marginal buyer it is fun to look at), although its risk admittedly makes this a more complicated example. Advances in logistics, and the success of Amazon, show that storage costs are getting lower all the time.
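The arbitrage logic here is simple enough to state in code; a sketch under the assumption of a constant proportional storage cost:

```python
def stagnation_feasible(r, storage_cost):
    """A persistently negative real rate r can only be an equilibrium if
    storing goods is even worse. Storing loses fraction c per period, a
    gross return of (1 - c), so no asset need pay less than r = -c:
    savers would hold goods instead. Toy no-arbitrage check only."""
    return r >= -storage_cost

# If beans, cumin, and modern logistics put storage costs near 1%,
# a -3% real rate cannot persist; it takes something like 10% storage
# costs (hypothetical number) to make deeply negative rates possible.
cheap = stagnation_feasible(-0.03, storage_cost=0.01)   # False
costly = stagnation_feasible(-0.03, storage_cost=0.10)  # True
```

This is why the model needs high storage or depreciation costs: the lower the marginal storage cost, the tighter the floor under the real rate.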
Secular stagnation might be a good model for Liberia and Venezuela and Mad Max, but not for the United States today or other growing economies with forward momentum. But a credible stagnation model for America needs to recognize that rates of return will be lower than usual but not negative in real terms. And there won’t be a long-run shortfall of demand because eventually market prices will adjust so that demand meets the supply we have. That is a supply-side stagnation model of the sort promoted by myself, Robert Gordon, Peter Thiel, Michael Mandel, and others. In the secular stagnation model as it is now being discussed by Keynesian macroeconomists, you end up twisting yourself in knots to force that real rate of return into permanently negative territory. Of course if you allow the real rate of return to be positive albeit low, the economy is not stuck in a perpetual liquidity trap as people move out of cash into investment assets. The demand-side stagnation mechanisms fade away into irrelevance once prices have some time to adjust.
Izabella Kaminska comments here. Josh Hendrickson has a very good blog post on the model here. I’ve already cited Stephen Williamson here, he notes the model is really about a credit friction and would be remedied with a greater supply of safe assets for savings, an easy enough problem to solve, for instance try the Bush tax cuts. Here is Ryan Decker on the model, and here is Ryan arguing that investment is aggregate demand also and many of us seem to have forgotten that, a very good post.
This is an important and interesting paper, but only because it shows the model doesn’t really hold and requires such contortions. The discussion of policy results is premature and way off the mark. The authors should have included sentences like “storage costs aren’t very high, and the economy as a whole does not exhibit negative real rates of return, so these policy conclusions are not actual recommendations.”
Although not central to his work, one of my favorite papers by today’s Nobel prize winner, Jean Tirole, is Extrinsic and Intrinsic Motivation (written with Roland Benabou). In this paper, Tirole and Benabou try to reconcile the economist’s intuition that incentives motivate with the idea from psychology that incentive schemes can sometimes demotivate. The psychologists argue that extrinsic motivation can reduce intrinsic motivation (but they are not at all clear on why this should be the case). Tirole and Benabou produce a similar finding by arguing that, in addition to providing motivation, an incentive scheme gives the agent — the one being incentivized — some information, and the information may undermine the motivation.
For example, suppose I tell my son, “If you get an A in math, I will give you $1000.” What does my son conclude?
- My father must think math is very important for my future to offer me $1000. My father is smart. I will work hard.
This is the message that I hope to send. But my son knows that I know something about math and also that I know something about him and he may use this knowledge to make a very different inference.
- If my father thinks I need $1000 to get an A, math must be very hard or I must lack talent. I will work for an A this year but next year I should probably not sign up for advanced math classes.
Or perhaps he infers
- If my father is offering me $1000 to do the right thing, he must not trust my judgment.
- My father is trying to use his money to control me. I rebel!
Thus reward has two effects: a pure incentive effect (holding information constant) and an inference effect. Notice that the inference effect depends on the context. Thus, without knowing the context — how the father gets along with the son and their history of interaction — we can’t know what the effect of the “incentive” will be. Thus I have argued that “an incentive is not an objective fact but a subjective interpretation.”
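The inference effect is just Bayesian updating. A small numerical sketch — my numbers, not Benabou and Tirole’s:

```python
def posterior_hard(prior_hard, p_bonus_if_hard, p_bonus_if_easy):
    """The son's Bayesian update on seeing the $1000 offer: if fathers
    are more likely to offer bonuses when the subject is hard (or the
    child struggling), the bonus itself is bad news about the task."""
    num = p_bonus_if_hard * prior_hard
    den = num + p_bonus_if_easy * (1 - prior_hard)
    return num / den

# Hypothetical: prior 30% that math is hard for him; bonuses are offered
# in 90% of hard cases but only 20% of easy ones. The offer pushes the
# posterior from 0.30 to about 0.66 -- incentive now fights inference.
posterior_hard(0.3, 0.9, 0.2)
```

If the bonus is equally likely either way, the posterior equals the prior and only the pure incentive effect remains; the demotivation comes entirely from the informativeness of the offer.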
I’m not convinced that Tirole and Benabou have the right answer on intrinsic and extrinsic motivation, but this and other papers indicate Tirole’s broadness of thought and his characteristic approach to issues.
By the way, working out the equilibrium in these games is not at all easy because the principal knows the agent will infer information about the characteristic from the reward structure, and the agent knows the principal knows that the agent will infer information about the characteristic from the reward structure, and so on – thus we have a Moriarty problem and must look for conditions such that there can be an equilibrium in which everything is common knowledge. But hey, if these problems were easy you wouldn’t get a Nobel prize for solving them!
See Tyler’s post and my other posts below for much more on Tirole.
1. Maps of cultural centers, a new research tool, fun too.
8. Serenading the cattle with my trombone (music video).
Krusell and Smith lay out the Solow and Piketty growth models very nicely but perhaps not in a way that is immediately transparent if you are not already familiar with growth models. Thus, in this note I want to lay out the differences using the Super Simple Solow model that Tyler and I developed in our textbook. The Super Simple Solow model has no labor growth and no technological growth. Investment, I, is equal to a constant fraction of output, Y, written I=sY.
Capital depreciates–machines break, tools rust, roads develop potholes. We write D(epreciation)=dK where d is the rate of depreciation and K is the capital stock.
Now the model is very simple. If I>D then capital accumulates and the economy grows. If I<D then the economy shrinks. Steady state is when I=D, i.e. when we are investing just enough each period to repair and maintain the existing capital stock.
Steady state is thus when sY=dK so we can solve for the steady state ratio of capital to output as K/Y=s/d. I told you it was simple.
Now let’s go to Piketty’s model, which defines output and savings in a non-standard way (net of depreciation) but when written in the standard way Piketty’s saving assumption is that I=dK + s(Y-dK). What this means is that people look around and they see a bunch of potholes and before consuming or doing anything else they fill the potholes, that’s dK. (If you have driven around the United States recently you may already be questioning Piketty’s assumption.) After the potholes have been filled people save in addition a constant proportion of the remaining output, s(Y-dK), where s is now the Piketty savings rate.
Steady state is found exactly as before, when I=D, i.e. dK+s(Y-dK)=dK, or sY=sdK, which gives us the steady-state level of capital to output of K/Y=s/(sd), that is, 1/d.
Now we have two similar looking expressions for K/Y, namely s/d for Solow and s/(sd) for Piketty. We can’t yet test which is correct because nothing requires that the two savings rates be the same. To get further, suppose that we now allow Y to grow at rate g holding K constant; that is, over time, because of better technology, we get more Y per unit of K. Since Y will be larger, the intuition is that the equilibrium K/Y ratio will be lower, holding all else the same. And indeed when you run through the math (hand waving here) you get expressions for the Solow and Piketty K/Y ratios of s/(g+d) and s/(g+sd) respectively, i.e. a simple addition of g to the denominator in both cases (again, bear in mind that the two s’s are different).
We can now see what the models predict when g changes — this is a key question because Piketty argues that a fall in g (which he predicts) will greatly increase K/Y. Here is a table showing how K/Y changes with g in the two models. I assume for both models that d=.05; for Solow I have assumed s=.3, and for Piketty I have calibrated so that the two models produce the same K/Y ratio of 3.75 when g=.03, which gives us a Piketty s=.138.
As g falls Piketty predicts a much bigger increase in the K/Y ratio than does Solow. In Piketty’s model as g falls from .03 to .01 the capital to output ratio more than doubles! In the Solow model, in contrast, the capital to output ratio increases by only a third. Remember that in Piketty it’s the higher capital stock plus a more or less constant r that generates the massive increase in income inequality from capital that he is predicting. Thus, the savings assumption is critical.
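The comparison is easy to reproduce; a sketch using the calibration above (d=.05, Solow s=.3, Piketty s≈.138):

```python
def ky_solow(s, g, d):
    # Steady state: investment sY must cover (g + d)K, so K/Y = s/(g + d)
    return s / (g + d)

def ky_piketty(s, g, d):
    # Depreciation replaced first, then a fraction s of net output saved:
    # I = dK + s(Y - dK), giving K/Y = s/(g + s*d) in steady state
    return s / (g + s * d)

d, s_solow, s_piketty = 0.05, 0.30, 0.138
for g in (0.03, 0.02, 0.01):
    print(f"g={g:.2f}  Solow K/Y={ky_solow(s_solow, g, d):.2f}"
          f"  Piketty K/Y={ky_piketty(s_piketty, g, d):.2f}")
# As g falls from .03 to .01, Solow's K/Y rises 3.75 -> 5.00 (a third),
# while Piketty's rises from roughly 3.74 -> 8.17 (more than double).
```

The divergence comes entirely from the s·d term in Piketty’s denominator: as g shrinks, the denominator shrinks much faster when the savings rate multiplies the (small) depreciation rate.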
I’ve already suggested one reason why Piketty’s saving assumption seems too strong–Piketty’s assumption amounts to a very strong belief that we will always replace depreciating capital first. Another way to see this is to ask where does the extra capital come from in the Piketty model compared to Solow? Well the flip side is that Solow predicts more consumption than Piketty does. In fact, as g falls in the Piketty model so does the consumption to output ratio. In short, to get Piketty’s behavior in the Solow model we would need the Solow savings rate to increase as growth falls.
Krusell and Smith take this analysis a few steps further by showing that Piketty’s assumptions about s are not consistent with standard maximizing behavior (i.e. in a model in which s is allowed to vary to maximize utility) nor do they appear consistent with US data over the last 50 years. Neither test is definitive but both indicate that to accept the Piketty model you have to abandon Solow and place some pretty big bets on a non-standard assumption about savings behavior.
Here is a potential new development:
Verizon is adding more antennas to its network, forming smaller wireless cells with stronger coverage and rolling out service on new segments of the wireless spectrum, the digital equivalent of opening new lanes for traffic. Sprint is introducing a service called Sprint Spark that increases access speeds if customers have devices that can use multiple wireless frequencies at once.
If pCell works as promised, Mr. Perlman’s technology could result in much bigger gains in wireless speeds. In traditional cellular networks, antennas placed around a city transmit wireless signals to all of the mobile devices within their area. As more people enter an area, they share the wireless network with everyone else there, resulting in slower speeds. Wireless carriers cannot simply solve the problem by putting antennas everywhere because their signals can be disrupted if they are too close together.
With a network of pCell antennas, someone with a mobile device will get access to the full wireless data speed in the area, regardless of how many other people are sharing that network, Mr. Perlman said.
There is also this:
The plan is to bring Google Fiber to 34 cities and see how that goes.
I do not feel I can judge the prospects for these developments. The point, however, is this. Improving connectivity is an extremely dynamic market sector. A high mark-up on cable internet connectivity, as might be applied by say a Comcast monopolist, is also creating a “prize” for further innovation in the sector. Admittedly one does not prefer to have this prize funded by deadweight loss (less broadband consumption) but virtually all prizes are funded by deadweight loss in some manner rather than by lump sum taxation.
When people claim “the current mark-up is too high,” that is an entirely reasonable stance. But when you rewrite it as “the current innovation prize, funded out of deadweight loss, is too high,” that reframing brings some clarity, some moderation, and I think also induces some more agnosticism about the costs of the current semi-monopoly. For similar reasons I don’t worry about monopoly in the eBook market and the like, and there the case for simply ignoring the problem is much stronger because the options and cheapness have exploded so radically and so quickly.
So I don’t see the current cable semi-monopoly as lasting that long. And its current cost cannot be that much higher than the cost of sending a disc in the mail, otherwise the disc would be sent. Alternatively, most communities have public libraries which offer pretty good and pretty free internet connections, including video streaming of course. If the argument is simply “this prize, for future connectivity innovation, cannot be funded from deadweight loss because it means that in the meantime some poorer people will have to wait to get their discs in the mail and make too many trips to the public library”…well, I guess I’m not that impressed this is a major public policy problem.
The longer and more heavily you regulate cable prices, the longer it will take this sector to reach a more competitive equilibrium.
By the way, dear reader, I am not clever enough to use Netflix streaming, as I find the TV menu confusing. So I still get the discs in the mail.
The ultimate potential of precision gene-editing techniques is beginning to be realised. Today, researchers in China report the first monkeys engineered with targeted mutations, an achievement that could be a stepping stone to making more realistic research models of human diseases.
Xingxu Huang, a geneticist at the Model Animal Research Center of Nanjing University in China, and his colleagues successfully engineered twin cynomolgus monkeys (Macaca fascicularis) with two targeted mutations using the CRISPR/Cas9 system — a technology that has taken the field of genetic engineering by storm in the past year. Researchers have leveraged the technique to disrupt genes in mice and rats, but until now none had succeeded in primates.
For the pointer I thank @autismcrisis.