Economists survey epidemiological models
The authors are Christopher Avery, William Bossert, Adam Clark, Glenn Ellison, and Sara Fisher Ellison. The paper is very good, though the abstract is uninformative. Here is one excerpt:
A notable shortcoming of the basic SIR model is that it does not allow for heterogeneity in state frequencies and rate constants. We discuss several different sources of heterogeneity in more detail in Section 2.
The most important and challenging heterogeneity in practice is that individual behavior varies over time. In particular, the spread of disease likely induces individuals to make private decisions to limit contacts with other people. Thus, estimates from scenarios that assume unchecked exponential spread of disease, such as the reported figures from the Imperial College model of 500,000 deaths in the UK and 2.2 million in the United States, do not correspond to the behavioral responses one expects in practice. Further, these gradual increases in “social-distancing” that can be expected over the course of an epidemic change dynamics in a continuous fashion and thus blur the distinctions between mechanistic and phenomenological models. Each type of model can be reasonably well calibrated to an initial period of spread of disease, but further assumptions, often necessarily ad hoc in nature, are needed to extend either type of model to later phases of an epidemic.
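To make the baseline concrete, here is a minimal SIR sketch of my own (not the authors' code). It adds an optional behavioral-response term that scales contacts down as infections rise, the kind of endogenous distancing the excerpt describes:

```python
# Minimal SIR sketch (illustration only, not from the paper).
# beta: transmission rate; gamma: recovery rate; behavioral_k scales
# transmission down as the infected share rises (voluntary distancing).

def simulate_sir(beta=0.3, gamma=0.1, behavioral_k=0.0, days=300, i0=1e-4):
    s, i, r = 1.0 - i0, i0, 0.0
    peak_i = i
    for _ in range(days):
        eff_beta = beta / (1.0 + behavioral_k * i)  # endogenous distancing
        new_inf = eff_beta * s * i
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak_i = max(peak_i, i)
    return peak_i, r  # peak infected share, final attack rate

unchecked_peak, unchecked_total = simulate_sir()
behavioral_peak, behavioral_total = simulate_sir(behavioral_k=50.0)
```

With the behavioral term switched on, both the peak and the final attack rate fall well short of the unchecked run, which is the authors' point about naive exponential-spread estimates.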
I recommend the whole paper.
The Jesús Fernández-Villaverde-Chad Jones epidemiological model
Here are the slides, definitely recommended. Might this be my favorite epidemiological model so far?
I interpret the last few slides as being gloomy for some “star early performers,” including California, though you should not necessarily attribute that view to the authors.
The O-Ring Model of Development
Michael Kremer’s Nobel prize (with Duflo and Banerjee) reminded me of his important paper The O-Ring Theory of Development. I also rewatched my video on this paper from Tyler’s and my online class, Development Economics. This was from our PowerPoint and iPad days, so there are no fancy graphics, but the video holds up! Mostly because it’s a great model with lots of interesting implications, not just for development but also for the structure of the US economy. See also Jason Collins on Garett Jones’s extension of the model.
Watch out for the weakies!: the O-Ring model in scientific research
Team impact is predicted more by the lower-citation rather than the higher-citation team members, typically centering near the harmonic average of the individual citation indices. Consistent with this finding, teams tend to assemble among individuals with similar citation impact in all fields of science and patenting. In assessing individuals, our index, which accounts for each coauthor, is shown to have substantial advantages over existing measures. First, it more accurately predicts out-of-sample paper and patent outcomes. Second, it more accurately characterizes which scholars are elected to the National Academy of Sciences. Overall, the methodology uncovers universal regularities that inform team organization while also providing a tool for individual evaluation in the team production era.
That is part of the abstract of a new paper by Mohammad Ahmadpoor and Benjamin F. Jones.
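The harmonic-average prediction is easy to illustrate with a toy calculation (my numbers, not the paper's data): the harmonic mean is dragged toward the weakest member, which is exactly the O-ring logic.

```python
# Harmonic mean of citation indices (toy illustration of the paper's
# prediction that team impact centers near this average).

def harmonic_mean(values):
    return len(values) / sum(1.0 / v for v in values)

balanced = harmonic_mean([40, 40, 40])  # all strong members: stays at 40
one_weak = harmonic_mean([40, 40, 4])   # one weak member drags it to 10
```

One weak coauthor pulls the predicted team impact from 40 down to 10, far below the arithmetic mean of 28.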
A carbon tax in a Hotelling model
It is rare that anyone wishes to broach this general topic, on either side of the debate. This is from a new working paper by Geoffrey Heal and Wolfram Schlenker:
We highlight important dynamic aspects of a global carbon tax, which will reallocate consumption through time: some of the initial reduction in consumption will be offset through higher consumption later on. Only reserves with high enough extraction cost will be priced out of the market. Using data from a large proprietary database of field-level oil data, we show that carbon prices even as high as 200 dollars per ton of CO2 will only reduce cumulative emissions from oil by 4% as the supply curve is very steep for high oil prices and few reserves drop out. The supply curve flattens out for lower price, and the effect of an increased carbon tax becomes larger. For example, a carbon price of 600 dollars would reduce cumulative emissions by 60%. On the flip side, a global cap and trade system that limits global extraction by a modest amount like 4% expropriates a large fraction of scarcity rents and would imply a high permit price of $200. The tax incidence varies over time: initially, about 75% of the carbon price will be passed on to consumers, but this share declines through time and even becomes negative as oil prices will drop in future years relative to a case of no carbon tax. The net present value of producer and consumer surplus decrease by roughly equal amounts, which are almost entirely offset by increased tax revenues.
Here is an earlier MR post on the same topic, and it gives more of the theoretical intuition.
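The mechanism in the abstract can be sketched with a toy reserve supply curve (illustrative numbers of my own, not the paper's field-level data): a tax prices out only those reserves whose extraction cost plus tax exceeds the oil price.

```python
# Toy Hotelling-style supply curve: a carbon tax removes only the
# reserves that become unprofitable at the going oil price.

def emissions_cut(reserves, oil_price, tax_per_barrel):
    # reserves: list of (barrels, extraction_cost_per_barrel)
    total = sum(barrels for barrels, _ in reserves)
    produced = sum(barrels for barrels, cost in reserves
                   if cost + tax_per_barrel <= oil_price)
    return 1.0 - produced / total

# Most reserves are cheap; the expensive tranche at the top is small.
reserves = [(60, 10), (30, 30), (10, 55)]
small_cut = emissions_cut(reserves, oil_price=70, tax_per_barrel=20)
large_cut = emissions_cut(reserves, oil_price=70, tax_per_barrel=50)
```

Because the last tranche of reserves is small and expensive, a modest tax barely moves cumulative emissions, while a much larger tax knocks out a big tranche, mirroring the paper's contrast between the 4% and 60% cuts.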
A simple model of Kawhi Leonard’s indecision
As a free agent, he is being courted by his current team, the Toronto Raptors, as well as the Los Angeles Clippers and the Los Angeles Lakers (now the team of LeBron James). And the internet is making jokes about him taking so much time for the decision. In Toronto, helicopters are following him around.
Due to the salary cap and related regulations, there is no uncertainty about how much money each team can offer. The offer that can vary the most in overall quality, however, is the one from the Los Angeles Lakers. For instance, if Kawhi is playing in Los Angeles with LeBron James, he might receive more endorsements and movie contracts (or not). If he is waiting on the decision at all, that is a sign he is at least sampling the Laker option, and seeing how much extra off-court value it can bring him. So the existence of some waiting favors the chance he goes to the Lakers. That said, if he is waiting a long time to see how good the Laker option is, that is a sign the Laker option is not obviously crossing a threshold and thus he might stay with Toronto.
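The argument can be put into a toy sequential-choice simulation (entirely my own sketch, with made-up payoffs): waiting to sample the uncertain offer has positive expected value only when its upside can beat the known offer.

```python
import random

# Toy option-sampling model: one known offer versus one uncertain
# offer. Waiting lets you observe the uncertain draw before choosing.

def value_of_waiting(known, uncertain_draws, trials=10_000, seed=0):
    rng = random.Random(seed)
    gain = 0.0
    for _ in range(trials):
        draw = rng.choice(uncertain_draws)  # learn the uncertain offer
        gain += max(draw, known) - known    # switch only if it is better
    return gain / trials
```

If the Laker draw can never beat the known offer, waiting is worthless; observed waiting therefore signals the uncertain option is genuinely in play.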
Some implications of monopsony models
More workers ought to be in larger firms, as those firms are afraid to hire more, knowing that doing so bids up wages for everyone. Therefore (ceteris paribus) the large firms in the economy ought to be larger.
Raising the legal minimum wage also reallocates workers into larger firms, and again makes them larger.
Tough stuff if you worry a lot about both monopoly and monopsony at the same time — choose your poison!
I have adapted those points from a recent paper by David Berger, Kyle Herkenhoff, and Simon Mongey, “Labor Market Power.” On the empirics, they conclude: “Our theory implies that this declining labor market concentration increased labor’s share of income by 2.89 percentage points between 1976 and 2014, suggesting that labor market concentration is not the reason for a declining labor share.” So the paper makes no one happy (good!): monopsony is significant, but has been declining in import.
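The textbook mechanism behind those implications can be sketched in a few lines (my own toy parameters, not the Berger-Herkenhoff-Mongey model): a monopsonist facing labor supply w(L) = a + b*L hires where the marginal cost of labor, a + 2*b*L, equals the marginal revenue product, stopping short of the competitive level.

```python
# Toy monopsony: hiring is restrained because each extra hire bids up
# the wage for all workers; a wage floor can undo some of that.

def monopsony_employment(mrp, a, b):
    return (mrp - a) / (2 * b)      # mrp = a + 2*b*L

def competitive_employment(mrp, a, b):
    return (mrp - a) / b            # mrp = a + b*L (wage taking)

def employment_with_min_wage(mrp, a, b, w_min):
    # Assumes w_min lies between the monopsony and competitive wage:
    # the floor flattens the supply curve up to the labor supplied at
    # w_min, capped at the competitive level.
    return min((w_min - a) / b, competitive_employment(mrp, a, b))

l_monopsony = monopsony_employment(20, 10, 1)            # 5.0
l_floor = employment_with_min_wage(20, 10, 1, w_min=18)  # 8.0
l_competitive = competitive_employment(20, 10, 1)        # 10.0
```

The wage floor raises hiring from 5 to 8, which is why a minimum wage can reallocate workers toward, and enlarge, the big monopsonistic firms.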
Robert Pindyck on climate change models
Pindyck, from MIT, is a leading expert in this area, here is part of his summary conclusion:
It would certainly be nice if the problems with IAMs [integrated assessment models] simply boiled down to an imprecise knowledge of certain parameters, because then uncertainty could be handled by assigning probability distributions to those parameters and then running Monte Carlo simulations. Unfortunately, not only do we not know the correct probability distributions that should be applied to these parameters, we don’t even know the correct equations to which those parameters apply. Thus the best one can do at this point is to conduct a simple sensitivity analysis on key parameters, which would be more informative and transparent than a Monte Carlo simulation using ad hoc probability distributions. This does not mean that IAMs are of no use. As I discussed earlier, IAMs can be valuable as analytical and pedagogical devices to help us better understand climate dynamics and climate–economy interactions, as well as some of the uncertainties involved. But it is crucial that we are clear and up-front about the limitations of these models so that they are not misused or oversold to policymakers. Likewise, the limitations of IAMs do not imply that we have to throw up our hands and give up entirely on estimating the SCC [social costs of carbon] and analyzing climate change policy more generally.
The entire essay is of interest, via Matt Kahn.
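Pindyck's preferred alternative, simple sensitivity analysis, looks like this in miniature (a stand-in formula with made-up numbers of my own, nothing like a real IAM): vary one uncertain parameter across a plausible range and report how the output moves, instead of integrating over an ad hoc distribution.

```python
# One-at-a-time sensitivity sketch in the spirit of Pindyck's
# suggestion (toy formula and numbers, not an actual IAM).

def toy_scc(climate_sensitivity, discount_rate, damage_coeff=0.01):
    # Illustrative stand-in: damages scale with climate sensitivity
    # and are discounted more heavily at higher rates.
    return damage_coeff * climate_sensitivity / discount_rate

# Sweep one key parameter across a plausible range; the range of
# outputs, not a single number, is the deliverable.
scc_range = [toy_scc(s, discount_rate=0.03) for s in (1.5, 3.0, 4.5)]
```

The range is transparent about which parameter drives the answer, with no pretense of a known probability distribution.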
A simple question about the signaling model of education
Let’s say, for purposes of argument, that education is 100% signaling, and furthermore let’s assume that the underlying traits of IQ, conscientiousness, and so on are not changing in the population over the relevant period of time.
Now consider a situation where income inequality is rising, at least in the early years of jobs. Since employers cannot discern worker quality — other than by observing the signal, that is — this should imply that getting an education is “more separating” than it used to be.
That in turn has to mean that an education is more rigorous than it used to be. No, not “getting in” (employers could hire their own admissions officers), I mean getting through. Finishing successfully is more of a mark of quality than it used to be, because finishing is harder. Finishing is harder because there is more rigor.
Is this true?
Models as indexing, and the value of Google
There are many arguments for the use of models in economics, including notions of rigor and transparency, or that models can help you to see relationships you otherwise might not have expected. I don’t wish to gainsay those, but I thought of another argument yesterday. Models are a way of indexing your thoughts. A model can tell you which are the core features of your argument and force you to give them names. You then can use those names to find what others have written about your topic and your mechanisms. In essence, you are expanding the division of labor in science more effectively by using models.
This mechanism of course requires that models are a more efficient means of indexing thoughts than pure words or propositions alone. In this view, it is often topic names or book indexes or card catalogs that models are competing with, not verbal economics per se.
The existence of Google therefore may have lowered the relative return to models. First, Google searches by words best of all. Second, and relatedly, if you have written only words, Google will help you find the related work you need; scholar.google.com kicks in too. In essence, there is a new and very powerful way of finding related ideas, and you need not rely on the communities that get built around particular models (though those communities largely will continue).
It is notable that open access, on-line economics writing doesn’t use models very much and is mostly content to rely on words and propositions. There are several reasons for this, but this productivity shock to differing methods of indexing may be one factor.
Still, it is not always easy to search by words. Many phrases — consider, say, “free will” — do not discriminate very well through search engines on the basis of IQ or rigor.
Further reasons why the Mundell-Fleming model is simply, flat-out wrong
Most trade is invoiced in very few currencies. Despite this, the Mundell-Fleming benchmark and its variants focus on pricing in the producer’s currency or in local currency. We model instead a ‘dominant currency paradigm’ for small open economies characterized by three features: pricing in a dominant currency; pricing complementarities, and imported input use in production. Under this paradigm: (a) terms of trade are stable; (b) dominant currency exchange rate pass-through into export and import prices is high regardless of destination or origin of goods; (c) exchange rate pass-through of non-dominant currencies is small; (d) expenditure switching occurs mostly via imports and export expansions following depreciations are weak. Using merged firm level and customs data from Colombia we document strong support for the dominant currency paradigm and reject the alternatives of producer currency and local currency pricing.
That is from a new NBER working paper by Casas, Díez, Gopinath, and Gourinchas. Here are my previous posts on Mundell-Fleming.
A simple and plausible model of some of the major stock market anomalies
The paper is entitled Sticky Expectations and Stock Market Anomalies, and it is by Jean-Philippe Bouchaud, Philipp Krueger, Augustin Landier, and David Thesmar, here is the abstract:
We propose a simple model in which investors price a stock using a persistent signal and sticky belief dynamics à la Coibion and Gorodnichenko (2012). In this model, returns can be forecasted using (1) past profits, (2) past change in profits, and (3) past returns. The model thus provides a joint theory of two of the most economically significant anomalies, i.e. quality and momentum. According to the model, these anomalies should be correlated, and be stronger when signal persistence is higher, or when earnings expectations are stickier. Using I/B/E/S data, we measure expectation stickiness at the analyst level. We find that analysts are on average sticky and, consistent with a limited attention hypothesis, more so when they cover more industries. We then find strong support for the model’s prediction in the data: both the momentum and the quality anomaly are stronger for stocks with more persistent profits, and for stocks which are followed by stickier analysts. Consistently with the model, both strategies also comove significantly.
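The belief dynamics are simple enough to sketch directly (my own minimal version of a sticky-updating rule in the spirit of Coibion and Gorodnichenko, not the authors' estimation code):

```python
# Sticky expectations: each period the forecast moves only part of the
# way toward the signal, so beliefs underreact to a persistent jump.

def sticky_forecasts(signal, stickiness):
    forecasts = []
    belief = signal[0]
    for s in signal:
        belief = stickiness * belief + (1 - stickiness) * s
        forecasts.append(belief)
    return forecasts

signal = [0, 0, 10, 10, 10, 10]  # profits jump and then persist
lagged = sticky_forecasts(signal, stickiness=0.5)
```

After the jump, forecasts climb only gradually toward 10, so past profits and past returns retain predictive power, which is the source of the quality and momentum anomalies in the model.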
For the pointer I thank the excellent Samir Varma.
The Solow Model and Ideas
The fifth video in the Solow series from our Principles of Macroeconomics course is really the capstone. It explains how ideas drive growth on the cutting edge. A key insight of the model, however (one which many people still don’t really get), is that ideas increase output and, by doing so, also drive capital accumulation, so both forces are always at play.
The Solow Model Animated!
Modern Principles of Economics was the first principles textbook to make the Solow model of economic growth easily accessible to undergraduates. By focusing on simple mathematics that the students already know, like the square root function, we made the Solow model easy to understand without losing the power of the model to explain the world.
Modern Principles is the only textbook with the Super Simple Solow model! And now we’ve brought the model to life with a series of fun videos in our Principles of Macroeconomics class at MRUniversity. You’ve never seen the Solow model taught like this!
Introduction to the Solow Model introduces the questions and the “characters” that drive the story. Physical capital and diminishing returns explains the idea of a production function and diminishing returns. We then introduce capital depreciation and focus on the most important idea for understanding the Solow model, the steady state.
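For readers who want the numbers, the Super Simple Solow steady state can be computed directly (a sketch of the textbook setup, with illustrative parameter values of my own): output is sqrt(K), a fraction gamma of output is invested, and capital depreciates at rate delta, so capital stops growing where gamma * sqrt(K) = delta * K.

```python
# Super Simple Solow: Y = sqrt(K), investment = gamma * Y,
# depreciation = delta * K. Setting investment equal to depreciation
# gives the steady state K* = (gamma / delta) ** 2.

def solow_path(gamma=0.3, delta=0.02, k0=1.0, periods=2000):
    k = k0
    for _ in range(periods):
        k += gamma * k ** 0.5 - delta * k  # investment minus depreciation
    return k

k_star = (0.3 / 0.02) ** 2  # analytic steady state, 225
k_sim = solow_path()        # simulated path settles at the same value
```

Starting from any positive capital stock, the simulated path converges to the steady state, which is the diminishing-returns story the videos walk through.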
I’ll cover some more videos in the Solow series later this week.
Working as a model for stock photography — what’s it like?
I very much liked this Jonathan Kay piece, which has so many good, interesting, and separate points, here is one of them:
“One of the most important elements of the Shutterstock quality-control process is to ensure there are no logos or other brand identifiers,” she told me. “Nor can the photos contain identifiable people or locations which haven’t released their legal rights.” The blackouts here can be extremely broad, and include some of the most famous landmarks on the planet. You can’t include the Eiffel Tower in most forms of stock photography, for instance. Nor can you include anyone wearing the iconic beige-and-blue Burberry pattern. Even a tiny patch of it in the background renders an image completely unusable.
And this:
Click through the Shutterstock database, and you find that professionally shot and curated stock photos invariably exhibit what might be called calculated soullessness. The subjects project human emotions—happy, sad, confused, angry—but in a simple, one-dimensional way. There should be nothing bespeaking a complex inner life. Real human interest always will distract the audience from the intended product or idea.
In closing:
How does a photographer achieve authenticity in an age where authentic culture increasingly is built around irony? More broadly: Is the project of organizing human experience into databases of generic happy faces and sad faces still relevant to us in 2016?
Alas, I can no longer remember to whom I owe the pointer; my apologies.
File under Those New Service Sector Jobs. And if that doesn’t suit you, here is “Calling all ‘bulky’ Alec Baldwin lookalikes”.