
An econometrician on the SEIRD epidemiological model for Covid-19

There is a new paper by Ivan Korolev:

This paper studies the SEIRD epidemic model for COVID-19. First, I show that the model is poorly identified from the observed number of deaths and confirmed cases. There are many sets of parameters that are observationally equivalent in the short run but lead to markedly different long run forecasts. Second, I demonstrate using the data from Iceland that auxiliary information from random tests can be used to calibrate the initial parameters of the model and reduce the range of possible forecasts about the future number of deaths. Finally, I show that the basic reproduction number R0 can be identified from the data, conditional on the clinical parameters. I then estimate it for the US and several other countries, allowing for possible underreporting of the number of cases. The resulting estimates of R0 are heterogeneous across countries: they are 2-3 times higher for Western countries than for Asian countries. I demonstrate that if one fails to take underreporting into account and estimates R0 from the cases data, the resulting estimate of R0 will be biased downward and the model will fail to fit the observed data.

Here is the full paper.  And here is Ivan’s brief supplemental note on CFR.  (By the way, here is a new and related Anthony Atkeson paper on estimating the fatality rate.)
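The identification problem in the abstract is easy to see in a toy version of the model. Here is a minimal SEIRD simulation (my own sketch with made-up parameter values, not Korolev's calibration): deaths depend on combinations of parameters, so different parameter sets can match early deaths while implying very different long-run paths.

```python
# Minimal SEIRD simulation via Euler steps (my sketch; all parameter
# values are illustrative, not the paper's estimates). Compartments are
# shares of the population.
def seird(beta, sigma, gamma, delta, E0=0.001, steps=2000, dt=0.1):
    S, E, I, R, D = 1.0 - E0, E0, 0.0, 0.0, 0.0
    deaths = []  # cumulative deaths path
    for _ in range(steps):
        dS = -beta * S * I                     # new infections
        dE = beta * S * I - sigma * E          # end of incubation
        dI = sigma * E - (gamma + delta) * I   # recovery and death outflows
        dR = gamma * I
        dD = delta * I
        S, E, I, R, D = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt, D + dD * dt
        deaths.append(D)
    return deaths

# R0 in this setup is beta / (gamma + delta): recoverable from the data
# only conditional on the clinical parameters, as the abstract says.
path = seird(beta=0.5, sigma=0.2, gamma=0.1, delta=0.01)
```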

And here is a further paper on the IHME model, by statisticians from CTDS, Northwestern University, and the University of Texas; an excerpt from the opener:

  • In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1).
  • The ability of the model to make accurate predictions decreases with increasing amounts of data (see Figure 2).
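The first bullet is a claim about interval coverage, which is mechanical to check. A sketch of the calculation (with made-up numbers, not the paper's data): a well-calibrated 95% prediction interval should contain the realized outcome roughly 95% of the time, while the paper reports coverage below 30%.

```python
# Coverage check for prediction intervals. All numbers here are
# hypothetical illustrations, not the paper's data.
def coverage_rate(actuals, intervals):
    inside = sum(1 for a, (lo, hi) in zip(actuals, intervals) if lo <= a <= hi)
    return inside / len(actuals)

# Four hypothetical states, each with a predicted interval of (0, 10):
example = coverage_rate([5, 12, 20, 3], [(0, 10)] * 4)  # half fall outside
```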

Again, I am very happy to present counter evidence to these arguments.  I readily admit this is outside my area of expertise, but I have read through the paper and it is not much more than a few pages of recording numbers and comparing them to the actual outcomes (you will note the model predicts New York fairly well, and thus the predictions are of a “train wreck” nature).

Let me just repeat the two central findings again:

  • In excess of 70% of US states had actual death rates falling outside the 95% prediction interval for that state (see Figure 1).
  • The ability of the model to make accurate predictions decreases with increasing amounts of data (see Figure 2).

So now really is the time to be asking tough questions about epidemiology, and yes, epidemiologists.  I would very gladly publish and “signal boost” the best positive response possible.

And just to be clear (again), I fully support current lockdown efforts (best choice until we have more data and also a better theory), I don’t want Fauci to be fired, and I don’t think economists are necessarily better forecasters.  I do feel I am not getting straight answers.

What is the best model for thinking about this lack of higher ed reporting?

The Education Department opened investigations into Harvard and Yale as part of a continuing review that it says has found U.S. universities failed to report at least $6.5 billion in foreign funding from countries such as China and Saudi Arabia, according to department materials viewed by The Wall Street Journal…

The department described higher-education institutions in the U.S., in a document viewed by the Journal, as “multi-billion dollar, multi-national enterprises using opaque foundations, foreign campuses, and other sophisticated legal structures to generate revenue.”

…Universities are required to disclose to the Education Department all contracts and gifts from a foreign source that, alone or combined, are worth $250,000 or more in a calendar year. Though the statute is decades old, the department only recently began to vigorously enforce it.

Officials accused schools of actively soliciting money from foreign governments, companies and nationals known to be hostile to the U.S. and potentially in search of opportunities to steal research and “spread propaganda benefitting foreign governments,” according to the document.

In addition, while the department said it has found foreign money generally flows to the country’s richest universities, “such money apparently does not reduce or otherwise offset American students’ tuition costs,” the document said.

Here is the full WSJ story.

The culture that is Buckingham: what is the underlying structural model?

The Duke of Cambridge has spoken of his “sadness” at the broken bond with his brother and voiced sorrow that the royal family is no longer a “team”.

As the Queen called emergency peace talks tomorrow at Sandringham to end the Windsors’ civil war, The Sunday Times can reveal that Prince William has said he feels sorrow that he and Prince Harry are now “separate entities” and expressed hope that they might pull together again in future.

“I’ve put my arm around my brother all our lives and I can’t do that any more; we’re separate entities,” he told a friend.

…Tom Bradby, who did the recent ITV interview in which Harry and Meghan confessed their sense of isolation, warned failure to keep the pair on side could lead the Duke and Duchess of Sussex to do a “no-holds-barred” interview that could damage the monarchy further.

…Harry and Meghan may have their security downgraded, with protection squad officers armed only with Tasers rather than guns.

Here is the full piece from the London Times.  Some reports say Meghan now has a deal with Disney, maybe she will do voice-over for a princess…

The O-Ring Model of Development

Michael Kremer’s Nobel prize (with Duflo and Banerjee) reminded me of his important paper The O-Ring Theory of Development. I also rewatched my video on this paper from Tyler’s and my online class, Development Economics. This was from our powerpoint and iPad days so there are no fancy graphics but the video holds up! Mostly because it’s a great model with lots of interesting implications not just for development but also for the structure of the US economy. See also Jason Collins on Garett Jones’s extension of the model.
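For readers who don't want to watch the video, the core of the model fits in a few lines. Output is the product of each worker's probability of performing their task correctly, so one weak link drags down the whole chain, and skill sorting follows (my minimal sketch, with made-up numbers):

```python
# Kremer-style O-ring production (a minimal sketch, numbers invented):
# output is the product of each worker's task-success probability.
def oring_output(skills):
    total = 1.0
    for q in skills:
        total *= q
    return total

# Four workers split into two two-person firms: pairing strong with
# strong (positive assortative matching) beats mixing the pairs,
# a key implication of the model.
assortative = oring_output([0.95, 0.9]) + oring_output([0.5, 0.45])
mixed = oring_output([0.95, 0.45]) + oring_output([0.9, 0.5])
```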

Watch out for the weakies!: the O-Ring model in scientific research

Team impact is predicted more by the lower-citation rather than the higher-citation team members, typically centering near the harmonic average of the individual citation indices. Consistent with this finding, teams tend to assemble among individuals with similar citation impact in all fields of science and patenting. In assessing individuals, our index, which accounts for each coauthor, is shown to have substantial advantages over existing measures. First, it more accurately predicts out-of-sample paper and patent outcomes. Second, it more accurately characterizes which scholars are elected to the National Academy of Sciences. Overall, the methodology uncovers universal regularities that inform team organization while also providing a tool for individual evaluation in the team production era.

That is part of the abstract of a new paper by Mohammad Ahmadpoor and Benjamin F. Jones.
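The harmonic-average finding is easy to illustrate: unlike the arithmetic mean, the harmonic mean is pulled toward the weakest team member, which is exactly the O-ring flavor of the result (citation counts below are invented):

```python
# Harmonic mean of individual citation indices, per the abstract's
# finding that team impact centers near this quantity.
def harmonic_mean(xs):
    return len(xs) / sum(1.0 / x for x in xs)

# A star (100 citations) paired with a junior coauthor (4 citations):
# the index sits near the weaker member, far below the arithmetic mean.
team_index = harmonic_mean([100, 4])   # roughly 7.7, versus 52 for the arithmetic mean
```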

A carbon tax in a Hotelling model

It is rare that anyone wishes to broach this general topic, on either side of the debate.  This is from a new working paper by Geoffrey Heal and Wolfram Schlenker:

We highlight important dynamic aspects of a global carbon tax, which will reallocate consumption through time: some of the initial reduction in consumption will be offset through higher consumption later on. Only reserves with high enough extraction cost will be priced out of the market. Using data from a large proprietary database of field-level oil data, we show that carbon prices even as high as 200 dollars per ton of CO2 will only reduce cumulative emissions from oil by 4% as the supply curve is very steep for high oil prices and few reserves drop out. The supply curve flattens out for lower price, and the effect of an increased carbon tax becomes larger. For example, a carbon price of 600 dollars would reduce cumulative emissions by 60%. On the flip side, a global cap and trade system that limits global extraction by a modest amount like 4% expropriates a large fraction of scarcity rents and would imply a high permit price of $200. The tax incidence varies over time: initially, about 75% of the carbon price will be passed on to consumers, but this share declines through time and even becomes negative as oil prices will drop in future years relative to a case of no carbon tax. The net present value of producer and consumer surplus decrease by roughly equal amounts, which are almost entirely offset by increased tax revenues.

Here is an earlier MR post on the same topic, and it gives more of the theoretical intuition.
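The steep-supply-curve point drives the whole result, and a toy calculation shows why: only reserves whose extraction cost plus the tax exceeds the oil price drop out, so if most reserves are cheap, even a large tax strands very little. All numbers below are mine and purely illustrative; the paper uses field-level data.

```python
# Toy version of the mechanism (invented costs and prices): a reserve is
# stranded when its extraction cost plus the carbon tax exceeds the
# (held-fixed, illustrative) oil price.
def stranded_share(costs, oil_price, tax_per_barrel):
    stranded = sum(1 for c in costs if c + tax_per_barrel > oil_price)
    return stranded / len(costs)

# A steep supply curve: 90% of reserves are very cheap to extract,
# only 10% sit near the margin.
costs = [10] * 90 + [55] * 10   # $/barrel, hypothetical
# A small tax strands nothing; a moderate tax strands only the marginal
# 10%; only a very large tax bites into the cheap reserves.
```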

A simple model of Kawhi Leonard’s indecision

As a free agent, he is being courted by his current team, the Toronto Raptors, as well as the Los Angeles Clippers and the Los Angeles Lakers (now the team of LeBron James). And the internet is making jokes about him taking so much time for the decision.  In Toronto, helicopters are following him around.

Due to the salary cap and related regulations, there is no uncertainty about how much money each team can offer.  The offer that can vary the most in overall quality, however, is the one from the Los Angeles Lakers.  For instance, if Kawhi is playing in Los Angeles with LeBron James, he might receive more endorsements and movie contracts (or not).  If he is waiting on the decision at all, that is a sign he is at least sampling the Laker option, and seeing how much extra off-court value it can bring him.  So the existence of some waiting favors the chance he goes to the Lakers.  That said, if he is waiting a long time to see how good the Laker option is, that is a sign the Laker option is not obviously crossing a threshold and thus he might stay with Toronto.
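The waiting logic is a standard sampling story, and a stylized version makes it concrete (all numbers invented, not actual contract figures): Toronto's offer is a known value, the Laker offer has an uncertain upside, and waiting pays only if the option value of seeing that upside exceeds the cost of delay.

```python
import random

random.seed(0)

# Stylized sampling model (my toy numbers). The known offer is certain;
# the Laker offer is a known base plus an uncertain off-court upside.
# Waiting reveals the upside before choosing.
def value_of_waiting(known_offer, laker_base, upside_draws, wait_cost):
    # Expected value of picking the better option once the upside is
    # known, net of the delay cost, relative to signing now.
    expected_best = sum(max(known_offer, laker_base + u) for u in upside_draws) / len(upside_draws)
    return expected_best - wait_cost - known_offer

# Upside uniform on [0, 30]: sampling is worth it when delay is cheap,
# not when it is costly.
upside = [random.uniform(0, 30) for _ in range(10_000)]
cheap_delay = value_of_waiting(100, 95, upside, 1)
costly_delay = value_of_waiting(100, 95, upside, 20)
```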

Some implications of monopsony models

More workers ought to be in larger firms, as those firms are afraid to hire more, knowing that doing so bids up wages for everyone.  Therefore (ceteris paribus) the large firms in the economy ought to be larger.

Raising the legal minimum wage also reallocates workers into larger firms, and again makes them larger.

Tough stuff if you worry a lot about both monopoly and monopsony at the same time — choose your poison!

I have adapted those points from a recent paper by David Berger, Kyle Herkenhoff, and Simon Mongey, “Labor Market Power.”  On the empirics, they conclude: “Our theory implies that this declining labor market concentration increase labor’s share of income by 2.89 percentage points between 1976 and 2014, suggesting that labor market concentration is not the reason for a declining labor share.”  So the paper makes no one happy (good!): monopsony is significant, but has been declining in import.
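The minimum-wage point is the textbook monopsony mechanism, worth spelling out (my stylized version, not the Berger-Herkenhoff-Mongey model): with an upward-sloping labor supply curve, the firm's marginal cost of labor rises faster than the wage, so it restricts hiring; a wage floor below the marginal revenue product flattens that marginal cost and raises employment.

```python
# Textbook monopsony sketch (my illustration, not the paper's model).
# Inverse labor supply: w(L) = a + b*L. Constant marginal revenue product p.
def monopsony_employment(p, a, b):
    # Marginal cost of labor is a + 2*b*L, since hiring one more worker
    # raises the wage for everyone already employed; set it equal to p.
    return (p - a) / (2 * b)

def competitive_employment(p, a, b):
    # Wage-taking firms hire until w = p along the supply curve.
    return (p - a) / b

def employment_with_min_wage(p, a, b, w_min):
    # A minimum wage between the monopsony wage and p flattens the
    # marginal cost of labor at w_min, so the firm hires along the
    # supply curve up to the floor.
    return min((w_min - a) / b, competitive_employment(p, a, b))

# Example: p = 20, supply w = 2 + L. The monopsonist hires 9; a $14
# minimum wage raises employment to 12; the competitive benchmark is 18.
```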

Robert Pindyck on climate change models

Pindyck, from MIT, is a leading expert in this area, here is part of his summary conclusion:

It would certainly be nice if the problems with IAMs [integrated assessment models] simply boiled down to an imprecise knowledge of certain parameters, because then uncertainty could be handled by assigning probability distributions to those parameters and then running Monte Carlo simulations. Unfortunately, not only do we not know the correct probability distributions that should be applied to these parameters, we don’t even know the correct equations to which those parameters apply. Thus the best one can do at this point is to conduct a simple sensitivity analysis on key parameters, which would be more informative and transparent than a Monte Carlo simulation using ad hoc probability distributions. This does not mean that IAMs are of no use. As I discussed earlier, IAMs can be valuable as analytical and pedagogical devices to help us better understand climate dynamics and climate–economy interactions, as well as some of the uncertainties involved. But it is crucial that we are clear and up-front about the limitations of these models so that they are not misused or oversold to policymakers. Likewise, the limitations of IAMs do not imply that we have to throw up our hands and give up entirely on estimating the SCC [social costs of carbon] and analyzing climate change policy more generally.

The entire essay is of interest, via Matt Kahn.

A simple question about the signaling model of education

Let’s say, for purposes of argument, that education is 100% signaling, and furthermore let’s assume that the underlying traits of IQ, conscientiousness, and so on are not changing in the population over the relevant period of time.

Now consider a situation where income inequality is rising, at least in the early years of jobs.  Since employers cannot discern worker quality — other than by observing the signal, that is — this should imply that getting an education is “more separating” than it used to be.

That in turn has to mean that an education is more rigorous than it used to be.  No, not “getting in” (employers could hire their own admissions officers), I mean getting through.  Finishing successfully is more of a mark of quality than it used to be, because finishing is harder.  Finishing is harder because there is more rigor.
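The separation logic can be made explicit in a toy condition (my stylization, not a claim about actual curricula): finishing a program of rigor r costs high-ability workers less than low-ability ones, and separation requires that the wage premium make finishing worthwhile only for the high type. A larger premium then requires more rigor to keep the low type out.

```python
# Toy separating condition (my stylization). Finishing a program of
# rigor r costs c_high * r for high-ability workers and c_low * r for
# low-ability workers, with c_high < c_low (single crossing).
def separates(premium, rigor, c_high=1.0, c_low=3.0):
    # High type finds finishing worthwhile; low type does not.
    return c_high * rigor <= premium < c_low * rigor

# With a premium of 4: rigor 1 is too easy (the low type would finish
# too), rigor 2 separates, and rigor 5 deters everyone.
```

A rising premium, that is, more early-career inequality, raises the minimum rigor needed to deter the low type, which is the mechanism behind the question above.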

Is this true?

Models as indexing, and the value of Google

There are many arguments for the use of models in economics, including notions of rigor and transparency, or that models can help you to see relationships you otherwise might not have expected.  I don’t wish to gainsay those, but I thought of another argument yesterday.  Models are a way of indexing your thoughts.  A model can tell you which are the core features of your argument and force you to give them names.  You then can use those names to find what others have written about your topic and your mechanisms.  In essence, you are expanding the division of labor in science more effectively by using models.

This mechanism of course requires that models are a more efficient means of indexing thoughts than pure words or propositions alone.  In this view, it is often topic names or book indexes or card catalogs that models are competing with, not verbal economics per se.

The existence of Google therefore may have lowered the relative return to models.  First, Google searches by words best of all.  Second and relatedly, if you have written only words Google will help you find the related work you need, scholar.google.com kicks in too.  In essence, there is a new and very powerful way of finding related ideas, and you need not rely on the communities that get built around particular models (though those communities largely will continue).

It is notable that open access, on-line economics writing doesn’t use models very much and is mostly content to rely on words and propositions.  There are several reasons for this, but this productivity shock to differing methods of indexing may be one factor.

Still, it is not always easy to search by words.  Many phrases — consider, say, “free will” — do not discriminate very well through search engines on the basis of IQ or rigor.

The Peltzman Model of Regulation and the Facebook Hearings

If you want to understand the Facebook hearings it’s useful to think not about privacy or technology but about what politicians want. In the Peltzman model of regulation, politicians use regulation to trade off profits (wanted by firms) and lower prices (wanted by constituents) to maximize what politicians want, reelection. The key is that there are diminishing returns to politicians in both profits and lower prices. Consider a competitive industry. A competitive industry doesn’t do much for politicians so they might want to regulate the industry to raise prices and increase firm profits. The now-profitable firms will reward the hand that feeds them with campaign funds and by diverting some of the industry’s profits to subsidize a politician’s most important constituents. Consumers will be upset by the higher price but if the price isn’t raised too much above competitive levels the net gain to the politician will be positive.

Now consider an unregulated monopoly. A profit-maximizing monopolist doesn’t do much for politicians. Politicians will regulate the monopolist to lower prices and to encourage the monopolist to divert some of its profits to subsidize a politician’s most important constituents. Monopolists will be upset by the lower price but if the price isn’t lowered too much below monopoly levels the net gain to the politician will be positive. (Moreover, a monopolist won’t object too much to reducing prices a little since they can do that without a big loss–the top of the profit hill is flat).
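Both cases drop out of a simple numerical exercise (my toy objective, not Peltzman's original specification): give the politician support that is concave in both firm profit and consumer surplus, and the vote-maximizing regulated price lands strictly between the competitive and monopoly prices.

```python
import math

# Toy Peltzman objective (my specification, for illustration only).
# Linear demand q = 10 - p with zero cost: competitive price 0,
# monopoly price 5. Support is concave in both profit and consumer
# surplus, capturing diminishing political returns to each.
def support(p):
    profit = p * (10 - p)
    consumer_surplus = 0.5 * (10 - p) ** 2
    return math.sqrt(profit) + math.sqrt(consumer_surplus)

# Grid search over prices: the vote-maximizing regulated price is
# interior, neither competitive nor monopoly pricing.
best_p = max((p / 100 for p in range(0, 501)), key=support)
```

So the politician pushes a competitive industry's price up and a monopolist's price down, just as in the two cases above.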

With that as background, the Facebook hearings are easily understood. Facebook is a very profitable monopoly that doesn’t benefit politicians very much. Although consumers aren’t upset by high prices (since Facebook is free), they can be made to be upset about loss of privacy or other such scandal. That’s enough to threaten regulation. The regulatory outcome will be that Facebook diverts some of its profits to campaign funds and to subsidize important political constituents.

Who will be subsidized? Be sure to watch the key players as there is plenty to go around and the money has only begun to flow but aside from campaign funds look for rules, especially in the political sphere, that will raise the costs of advertising to challengers relative to incumbents. Incumbents love incumbency advantage. Also watch out for a deal where the government limits profit regulation in return for greater government access to Facebook data including by the NSA, ICE, local and even foreign police. Keep in mind that politicians don’t really want privacy–remember that in 2016 Congress also held hearings on privacy and technology. Only those hearings were about how technology companies kept their user data too private.

The basic model for Puerto Rico isn’t working any more

That is the topic of my latest Bloomberg column, here is one bit:

Worse yet, the island has about $123 billion in debt and pension obligations, compared with a gross domestic product of slightly more than $100 billion, a number that is sure to fall. In the last decade, the island has lost about 9 percent of its population, including many ambitious and talented individuals. In the past 20 years, Puerto Rico’s labor force shrank by about 20 percent, with the health-care sector being especially hard hit. The population of children under 5 has fallen 37 percent since 2000, and Puerto Rico has a greater share of its population over 60 than any U.S. state.

And then came Hurricane Maria.  According to a recent NYT piece, almost half of Americans don’t know that Puerto Ricans are American citizens.

In my considered opinion, using government money to help Puerto Rico has a much higher humanitarian return than devoting it to the further subsidization of health care.

Further reasons why the Mundell-Fleming model is simply, flat-out wrong

Most trade is invoiced in very few currencies. Despite this, the Mundell-Fleming benchmark and its variants focus on pricing in the producer’s currency or in local currency. We model instead a ‘dominant currency paradigm’ for small open economies characterized by three features: pricing in a dominant currency; pricing complementarities, and imported input use in production. Under this paradigm: (a) terms of trade are stable; (b) dominant currency exchange rate pass-through into export and import prices is high regardless of destination or origin of goods; (c) exchange rate pass-through of non-dominant currencies is small; (d) expenditure switching occurs mostly via imports and export expansions following depreciations are weak. Using merged firm level and customs data from Colombia we document strong support for the dominant currency paradigm and reject the alternatives of producer currency and local currency pricing.

That is from a new NBER working paper by Casas, Díez, Gopinath, and Gourinchas.  Here are my previous posts on Mundell-Fleming.
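Features (b) and (c) of the paradigm follow almost mechanically from dollar invoicing, which a few lines make concrete (my illustration, with invented exchange rates):

```python
# Stylized dominant-currency pricing (my illustration, invented rates).
# An exporter in country A sells to country B but invoices in dollars.
def import_price_in_B(usd_price, usd_per_unit_of_B):
    # Local-currency price in B of a dollar-invoiced good.
    return usd_price / usd_per_unit_of_B

# B's currency loses 10% of its dollar value: pass-through into B's
# import prices is complete, per feature (b) of the paradigm.
p_before = import_price_in_B(100.0, 0.50)
p_after = import_price_in_B(100.0, 0.45)

# The A-B bilateral rate never enters the formula, so holding dollar
# rates fixed, its movements leave B's import prices untouched: feature (c).
```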

Modeling Donald Trump as a deal-maker

Imagine a politician who had trading in his utility function, and furthermore used some intertemporal price discrimination analogies from real estate — what might that look like?  That is the topic of my latest Bloomberg column, here is an excerpt from the latter part of the piece:

If those trades take up much of the legislative calendar over the next year or two (while the Republican majority remains secure), what might Trump then do next? He hardly seems like a caretaker president or someone who would enjoy presiding over gridlock.

Like a good real estate magnate, the next step then would be to turn to the lower valuation buyers – namely the Democrats in Congress – and offer them some lesser deals at lower prices. The Democrats are the buyers who will value dealing less because many (but not all) of their voters are less enthused about working with Trump, and also they value less what Trump might plausibly offer. Still, there will be room for further exchange.

If Trump, perhaps working with his daughter Ivanka, crafted a reasonable federal child-care or preschool policy, many Democrats would jump on board. They wouldn’t become Trump supporters in return, but they would moderate their criticism, as they would have a victory to take home to their voters.

In short, the second half of the Trump administration might consist of Trump offering a series of smaller but still significant trades to Democrats, taking care to bring along some Republican votes as well.

And what might the final fire sale consist of? Imagine Trump in his fourth year, essentially running for re-election as an independent, feeling he has nothing to lose and intent on showing that he has broken gridlock. Imagine a Trump whose ideology remains fluid and who loves to make deals more than to adhere to an ideological line. Could he also “sell” climate change legislation and a higher federal minimum wage to Democrats and a smattering of swing-district Republicans?

That is just one scenario, not an absolute prediction, but it would mean lots and lots of policy change under Trump.