Thursday, July 25, 2013
The Quixotic Quest to Make Pundits Suck Less
Thanks to Sean for passing this along.
“The average expert’s forecasts were revealed to be only slightly more accurate than random guessing—or, to put more harshly, only a bit better than the proverbial dart-throwing chimpanzee. And the average expert performed slightly worse than a still more mindless competition: simple extrapolation algorithms that automatically predicted more of the same.” ~ Dr. Philip Tetlock
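The “simple extrapolation algorithms” Tetlock mentions are essentially no-change (persistence) forecasts: predict that the future will look just like the recent past. As a rough illustration only, here is a minimal sketch in Python; the series and scoring below are hypothetical, not from Tetlock’s study.

```python
# A naive "more of the same" baseline: the forecast for the next period
# is simply the most recently observed value.
# (Hypothetical toy data, used only to illustrate the idea.)

def persistence_forecast(history):
    """Predict that the next value equals the last observation."""
    return history[-1]

def mean_absolute_error(actuals, forecasts):
    """Average absolute gap between what happened and what was predicted."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Toy example: some indicator observed over six periods.
series = [2.0, 2.1, 2.3, 2.2, 2.4, 2.5]

# Forecast each period from the values seen so far, then score against reality.
forecasts = [persistence_forecast(series[:i]) for i in range(1, len(series))]
actuals = series[1:]

print(mean_absolute_error(actuals, forecasts))
```

Tetlock’s point is that even this mindless baseline edged out the average expert.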
During the presidential election, statistician Nate Silver consistently predicted on his FiveThirtyEight blog (hosted by the New York Times) that the odds of Mitt Romney winning the presidency were slim. Leading conservative political experts like Karl Rove and Newt Gingrich called his claims ridiculous. Silver kept the faith in his poll numbers and models; the pundits trusted their “years of experience.”
In the end, Silver (whose background was in the “moneyball” world of applying statistics to baseball) embarrassed the pundits by accurately predicting which states each candidate would win. But you wouldn’t know it by looking at the experts’ careers. News channels never stopped welcoming Newt Gingrich and his ilk as experts, while Karl Rove continues to collect healthy paychecks as a consultant for political campaigns around the world. The consequences of their nationally televised embarrassment were exactly nil.
It is this lack of accountability when it comes to making predictions that fascinates Philip Tetlock, a professor at the University of Pennsylvania. His interest in expert judgment began after the end of the Cold War. Neither conservatives nor liberals had predicted how the Soviet Union would end, yet he watched both sides fit the emerging developments to their original ideas, the same ideas that had just failed.
………………
Related book:
Expert Political Judgment: How Good Is It? How Can We Know?
Related link:
HOW TO WIN AT FORECASTING: A Conversation with Philip Tetlock