Corporate Prediction Markets Work Well

Prediction markets predict public events such as election outcomes better than polls or other forecasting mechanisms. Internal corporate prediction markets, covering events such as sales forecasts, product launch dates, and demand for product features, have been less well studied. Internal corporate markets tend to have fewer participants than public markets, and the participants often have strategic interests and biases. Thus, it has been an open question how well these markets operate.

Cowgill and Zitzewitz report on a number of different types of prediction markets run by Google, Ford, and Firm X. Although they find evidence of some biases, they also find that corporate prediction markets work better than alternative forecasting methods:

Despite large differences in market design, operation, participation, and incentives, we find that prediction market prices at our three companies are well calibrated to probabilities and improve upon alternative forecasting methods. Ford employs experts to forecast weekly vehicle sales, and we show that contemporaneous prediction market forecasts outperform the expert forecast, achieving a 25% lower mean-squared error (p = 0.104).

…The strong relative predictive performance of the Google and Ford markets is achieved despite several pricing inefficiencies. Google’s markets exhibit an optimism bias. Both Google and Ford’s markets exhibit a bias away from a naive prior (1/N, where N is the number of bins, for Google and prior sales for Ford). However, we find that these inefficiencies disappear by the end of the sample. Improvement over time is driven by two mechanisms: first, more experienced traders trade against the identified inefficiencies and earn higher returns, suggesting that traders become better calibrated with experience. Secondly, traders (of a given experience level) with higher past returns earn higher future returns, trade against identified inefficiencies, and trade more in the future. These results together suggest that traders differ in their skill levels, they learn about their ability over time, and self-selection causes the average skill level in the market to rise over time.
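The 25% figure in the excerpt refers to mean-squared error, the average of squared forecast errors. A minimal sketch of the comparison, with invented weekly sales numbers (the paper's actual data are proprietary, so these figures are illustrative only):

```python
# Compare two forecasters by mean-squared error, as in the Ford example.
# All numbers below are invented for illustration.
def mse(forecasts, actuals):
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

actual = [52_000, 48_500, 50_200, 47_800]   # realized weekly vehicle sales
expert = [54_000, 47_500, 51_700, 45_300]   # hypothetical expert forecasts
market = [53_800, 47_600, 51_400, 45_700]   # hypothetical market forecasts

# Fraction by which the market's MSE undercuts the expert's (~27% here).
improvement = 1 - mse(market, actual) / mse(expert, actual)
```

A lower MSE penalizes large misses heavily, so a forecaster that avoids occasional big errors can win even if its typical error is similar.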

Addendum: It’s an interesting commentary on academic publishing that Marginal Revolution first covered this paper in a working version in 2008! An extended version was received by the Review of Economic Studies in 2010, which accepted a final version in 2014 and then published the paper in 2015.


No comments yet? Scott Sumner has a hard on reading this post...

Very grateful if anyone has an ungated link, or could send a version to me?

Academic publishing is a disaster.

Depends on where you're sitting -- the publish/perish model works OK for many academics as a comfortable livelihood; publication quality and timeliness are far lesser concerns. The result is that most academic publishing is mere noise.

7 years to get a trivial study published highlights the time value of information -- even good accurate information has a half-life of influence/relevance/value to its field.

The half-life of the referenced study is about 1-2 years, whereas that of a comparable study in a hard science such as physics would be about 10 years.

but yes it's a mess to outsiders looking up at the academic publishing ivory tower.

Given what you say about timeliness and half-lives (which I agree with), it is striking that the data from this study is unavailable for proprietary reasons. No doubt this stance can be defended, but really???

Academic here. Today, the main purpose of journals is not to communicate findings, but to give a stamp of approval for the quality of the research. That's why people distribute working papers online. The findings didn't change much if at all between 2008 and 2015. It was a process of vetting.

Science and Nature function differently: they play both roles. But by speeding up the process, a lot of false-positive studies get published. Is this better?

I agree that Science and Nature result in a lot of false positives, but not that speed has anything to do with it. It's also not clear to me that the stamp of approval of "traditional" journals has anything to do with usefulness, novelty, or correctness.

Could we get a tldr on how these prediction markets function? What's in it for the participants?

Quick disclaimer: I work at a prediction market company that sells to companies doing internal forecasting.

Basically, they work by aggregating people's forecasts, but in a much better way than just taking the median or average. Participants are given "money" and can buy and sell shares in the outcomes of an event, moving the "price" toward the probability they think is correct. So if you're trying to forecast car sales, and the market says the probability of selling 100,000-125,000 cars is 23%, but you think it should be more like 30%, you can buy shares to bump up the probability. Because users have a limited amount of money to spend, they have to think about where to spend it, which tends to be on questions where they have a better sense of the likely result.
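The mechanics of "buying shares to move the price" can be sketched with an automated market maker. The mechanisms in the paper's markets varied; Hanson's logarithmic market scoring rule (LMSR) is a common choice for thin internal markets, and everything below (class name, liquidity parameter, bin count) is illustrative rather than taken from the paper:

```python
import math

class LMSRMarket:
    """Hanson's logarithmic market scoring rule (LMSR) market maker.

    One share of an outcome pays 1 unit of play money if that outcome
    occurs. Prices always sum to 1, so they can be read as probabilities.
    """
    def __init__(self, n_outcomes, b=100.0):
        self.b = b                    # liquidity: higher b = prices move less per trade
        self.q = [0.0] * n_outcomes   # shares outstanding per outcome

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def prices(self):
        z = sum(math.exp(x / self.b) for x in self.q)
        return [math.exp(x / self.b) / z for x in self.q]

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome`; returns the cost charged to the trader."""
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

# Five sales bins, e.g. "<100k", "100-125k", ...; prices start uniform at 0.2.
m = LMSRMarket(5)
# A trader who thinks bin 1 is underpriced buys shares, pushing its price up.
cost = m.buy(1, 60)
```

A higher liquidity parameter `b` means each trade moves prices less, which matters in corporate markets with only a handful of participants.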

As for what's in it for those participants, that's the golden question for prediction markets. As noted above, in a company you're dealing with a limited number of people, most of whom don't want to spend extra time away from their normal job to forecast. On the public sites, you get a number of people who just like thinking about these things and being correct. The hope is that there are people in a company who think this way too. Usually there need to be other incentives in a company, but what those incentives are depends on the company culture. We have found the most successful are things like recognition by leadership, special bonuses, or even making forecasting part of the job description. But figuring out these incentives is part of the process for every company we work with.

Prediction markets have been VERY well studied in corporations, and, in fact, the earliest uses of internal corporate prediction markets were by HP in 2004.

In 2008 the paper was framed around something called "economic microgeography". Now, it is about the efficiency of betting markets.

By "work well" you mean "forecast accurately." That is quite different from being recognized as providing business value worth their costs. By that measure these markets have failed, because these firms have not continued to use them.


I wonder if there aren't institutional reasons militating against them even if they do add value. For example, can you imagine a bank using an internal betting market to predict corporate defaults? No matter how accurate, you'd never get it past the regulators. The stories matter, even if they're just rationalisations. So good storytellers are more valued than good forecasters. But I haven't read the paper, because it's behind a ****ing paywall.

Either that, or the experts in these companies whose jobs would be replaced by prediction markets are good at protecting their jobs.

I thought I read this from you in 2008, on this very post? Too lazy to look it up, but, if so, that's very lazy of you to cut and paste the same comment from back then, now. And what you wrote makes no sense (gibberish), as you don't specify which 'firms' you have in mind.

There's an earlier version that is ungated and implies that Firm X is Koch Industries ...

These results (at least in the earlier version) seem consistent with a maximum entropy approach.
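The link the comment pointed to is elided here, but the connection is easy to illustrate: with no constraints beyond "probabilities over N bins," the maximum-entropy distribution is exactly the uniform 1/N prior the paper uses as its naive benchmark. A minimal sketch (my construction, not from the cited version):

```python
import math

def entropy(p):
    """Shannon entropy in nats."""
    return -sum(x * math.log(x) for x in p if x > 0)

n = 5
uniform = [1.0 / n] * n                # the paper's naive 1/N prior
skewed = [0.6, 0.1, 0.1, 0.1, 0.1]     # any other distribution over 5 bins

# The uniform distribution attains the maximum possible entropy, log(n);
# any other distribution over the same bins scores strictly lower.
```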


