Philip Tetlock’s Good Judgment Project

Philip emails me:

Your recent book was very persuasive, and I see an interesting connection between your thesis and the “super-forecasters” we have been trying to select and then cultivate in the IARPA geopolitical forecasting tournament.

One niche we humans can carve out for ourselves is, under certain fleeting conditions, out-smarting algorithms (one of the extreme challenges we have been giving our supers is out-predicting various wisdom-of-crowd indicators).
You have brought us many forecasters over the years (including some “supers”), so I thought your readers might find the attached article on the research program in The Economist of interest.
Our recruitment address is:

The website states:

The Good Judgment Project is a four-year research study organized as part of a government-sponsored forecasting tournament. Thousands of people around the world predict global events. Their collective forecasts are surprisingly accurate.

You can sign up and do it.  Here is a related article from The Economist.  Here is a good Monkey Cage summary of what they are doing.


Ender's Game

Comments for this post are closed

It's fun--I participated for part of round 1. But to do it well (both to answer all the questions and to update your estimates as new info becomes available) is very time-consuming.

How did Robin's method do?

I did round one also. I found that I wanted to see more of the collective results (which was my interest in the first place) than were available, so I dropped out.

Might be time to have another go!


I agree, fun and time-consuming. Also, it could be a lot more fun with a better UI.


I only make predictions about fields where I've been studying the data for over 40 years, such as school test scores. I turn out to be remarkably less surprised by current developments on that topic than, say, the editors of the New York Times.


There is a fundamental epistemic issue at work in prediction, which is that audiences are most interested in subjects where unbiased experts are close to evenly divided: Who is going to win the Super Bowl? Auburn or Florida State? Is the stock market going up or down on Monday?

In contrast, interest fades as answers become obvious. Back in the 1940s, over 100,000 football fans would show up at Soldier Field in August to see an exhibition game between last year's NFL champion team and last year's college football All-Americans. It was very exciting because the teams were pretty evenly matched.

Over time, however, the NFL champs got so much better than the youngsters that it became ho-hum. When lightning stopped the game partway through in 1974, the game and the series were never resumed.

I don't try to forecast events like the Super Bowl where there are a lot of relatively unbiased experts competing. Instead, I try to focus upon areas like social policy in which the experts aren't all that expert and massive career sanctions threaten anybody who publicly explains what's going on in Occam's Razor terms. So, I have an extremely good record of making predictions about which study endorsing the latest panacea isn't likely to hold up.

What I do is pretty easy, and I'm sure lots of people could do it better than me, but why should they? What's the incentive structure? There are strong incentives in place for bad predictions in the social policy and analysis field, and people are routinely rewarded for being wrong. So, America gets more of what it pays for.


I did Round 1 - it was fun, but time consuming. I'm still spending the $150 Amazon gift card I got.
