Friday, May 6, 2011

How Accurate Is Your Pundit? (Hamilton.edu)

Allen Thomson, the unusually keen-eyed observer of all things odd and analytic, recently brought to my attention a study by a group of students at Hamilton College.

Titled Are Talking Heads Blowing Hot Air? An Analysis Of The Accuracy Of Forecasts In The Political Media, the study, complete with detailed annexes and statistical analysis, assessed the accuracy of 26 media pundits' political forecasts made in 2008.

The students used something they called a Prognosticator Value Score (PVS) to rank each of the 26. The PVS factors in how many predictions were made, how many were right, how many were wrong, and on how many the prognosticators hedged.
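The post does not give the students' exact formula (it is spelled out in the study's annexes), but a minimal sketch of a PVS-like score, assuming correct calls add, incorrect calls subtract, and hedges dilute the total, might look like this in Python. The function name, the scale factor, and the weighting are my illustrative guesses, not the study's actual method:

def pvs_like_score(right, wrong, hedged, scale=10):
    # Hypothetical PVS-style score on a -scale..+scale axis.
    # Illustrative guess only; NOT the Hamilton students' actual formula.
    total = right + wrong + hedged
    if total == 0:
        raise ValueError("no predictions to score")
    return scale * (right - wrong) / total

# Purely hypothetical pundit: 10 right, 6 wrong, 4 hedged out of 20 calls.
print(pvs_like_score(right=10, wrong=6, hedged=4))  # 2.0

Note that a ratio like this rewards accuracy but is indifferent to how many predictions were made, which is the quibble I raise below.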

The best? Paul Krugman, with a PVS of 8.2. (You can see a screenshot of his score sheet to the right. Note: score sheets for each of the pundits are in the full-text document.)

The worst?  Cal Thomas, with a PVS of -8.7 (You read that right.  Negative eight point seven...).

The students were able to confirm much of what Philip Tetlock has already told us: many things simply do not matter. Age, race, gender, and employment had no effect on forecasting accuracy.

The students did find that liberal, non-lawyer pundits tended to be better forecasters, but the overall message of their study is that the pundits they examined were, in aggregate, no better than a coin flip.

This is more interesting than it sounds, as one of Tetlock's few negative correlations was between a forecaster's accuracy and his or her exposure to the press. The more exposure, Tetlock found, the more likely the forecaster was to be incorrect. Here, there may be evidence of some sort of internal "correction" made by public pundits, i.e., people who make a living, at least in part, by making forecasts in the press.

I have a few methodological quibbles with the study. The number of predictions, for example, did not factor into the PVS. Kathleen Parker made only 6 testable predictions, got 4 right (a 67 percent hit rate), and had a PVS of 6.7. Nancy Pelosi, on the other hand, made 27 testable predictions, got 20 right (74 percent), but had a PVS of only 6.2.
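One way to let sample size count (my suggestion, not anything the students did) would be to rank pundits by the lower bound of a Wilson confidence interval on the hit rate rather than by the raw rate. A minimal sketch, looking only at right answers out of testable predictions and ignoring hedges:

import math

def wilson_lower_bound(hits, n, z=1.96):
    # Lower bound of the 95% Wilson score interval for a hit rate.
    # Small samples get pulled toward zero, so 4/6 ranks below 20/27.
    if n == 0:
        return 0.0
    p = hits / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom

print(wilson_lower_bound(4, 6))    # ~0.30 (Parker: 4 of 6)
print(wilson_lower_bound(20, 27))  # ~0.55 (Pelosi: 20 of 27)

Scored this way, Pelosi's longer (and slightly more accurate) track record would rank above Parker's.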

Despite these minor quibbles, this study is a bold attempt to hold these commentators accountable for their forecasts, and the students deserve praise for their obvious hard work and intriguing results.

3 comments:

Andrew said...

Interesting. I interpreted this as counter-Tetlock (of course I haven't seen your entire study). I consider Krugman a pretty intense expert on the subject of economics (PhD, Nobel, etc.)... so why does his expertise rank so much higher than other "generalists"?

I could be wrong, but I think he was even mentioned specifically in Tetlock's book as an example of an expert who wasn't necessarily better at predicting than others...

Kristan J. Wheaton said...

Andrew,

We did not do this study at Mercyhurst. It was done by students at Hamilton College.

The students indicated that their aggregate findings were generally consistent with Tetlock's.

They, however, looked at only a slice of the estimative conclusions of the various pundits (those circulating in advance of the 2008 elections).

It is entirely possible that Krugman was only good/lucky in that specific circumstance or for those specific questions.

Kris

Kent Clizbe said...

Nancy Pelosi a "Good" prognosticator?

Nancy Pelosi, Oct. 2010: "I have every anticipation that we will come together...with me as speaker of the house," after the election.

The inherent biases of this study were clearly evident when they coded Sam Donaldson as "conservative."

Sam Donaldson:
"The problem Republicans have, so many of them are sanctimonious."

But the bias of timing the data-gathering period to the anti-Bush election cycle of 2008 is almost beyond comprehension.

Scoring Republican "predictions" during the run-up to the 2008 election, of course, resulted in Republicans being mostly wrong, and vice versa for Democrats in that election.

If the data had been collected during the run-up to the 2010 election, the results would have been exactly opposite. Nancy Pelosi's partisan posing would have been scored negative, and George Will's more conservative predictions would have been more correct.

The first thing an analyst has to do is examine his or her own biases.

This "study" scores "fail" for biased selection of time period to collect data.

The fact that the study is a product of Hamilton College is even worse for its bias score. Hamilton College is notorious for actions taken against a couple of non-liberal professors:
http://www.nas.org/polArticles.cfm?doctype_code=Article&doc_id=1268

Maybe a better title for their study would be: "How Biased is your Professor?"