Cato's Justin Logan wrote a smart piece in National Interest about applying prediction markets to TV pundits and intelligence officials:
Foreign-policy analysts have an incredibly difficult task: to make predictions about the future based on particular policy choices in Washington. These difficulties extend into the world of intelligence, as well. The CIA issues reports with impossibly ambitious titles like "Mapping the Global Future", as if anyone could actually do that. The father of American strategic analysis, Sherman Kent, grappled with these difficulties in his days at OSS and CIA. When Kent finally grew tired of the vapid language used for making predictions, such as "good chance of", "real likelihood that" and the like, he ordered his analysts to start putting odds on their assessments. When a colleague complained that Kent was "turning us into the biggest bookie shop in town", Kent replied that he’d "rather be a bookie than a [expletive] poet."
Kent’s instinct was right. More bookies and fewer poets are what the United States needs, both in intelligence analysis and in foreign-policy punditry. University of California Berkeley professor Philip Tetlock examined large data sets where experts on various topics made predictions about the future. He was troubled to discover "an inverse relationship between how well experts do on scientific indicators of good judgment and how attractive these experts are to the media and other consumers of expertise." He proposed one way to reform the situation: conditioning experts’ appearance in high-profile media venues on "proven track records in drawing correct inferences from relevant real-world events unfolding in real time."
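Tetlock's "proven track records" don't require anything exotic: one standard way to score a forecaster is the Brier score, the mean squared error between stated probabilities and what actually happened. A minimal sketch (the metric is standard; the toy forecasts are invented for illustration):

```python
def brier_score(forecasts):
    """Mean squared error between stated probabilities and outcomes.

    forecasts: list of (probability, outcome) pairs, where probability is
    the expert's stated chance of the event and outcome is 1 if it
    happened, 0 if it did not. Lower is better; 0.25 is what coin-flip
    (always 50%) forecasting earns.
    """
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# A "bookie" who puts 70% odds on events that come true 2 times in 3
# outscores a "poet" who calls the same events with total certainty:
bookie = [(0.7, 1), (0.7, 1), (0.7, 0)]
poet = [(1.0, 1), (1.0, 1), (1.0, 0)]
print(brier_score(bookie))  # ~0.223
print(brier_score(poet))    # ~0.333
```

Run over a few years of a pundit's televised predictions, a score like this would make the gap between media appeal and forecasting skill visible in a single number.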
It’s a fair point. The best way to correct the situation is by developing a predictions database, where experts can weigh in on specific, falsifiable claims about the future, putting their reputations on the line. Something like this was envisioned in a DARPA program developed under Admiral John Poindexter in 2003. The so-called "policy analysis market" was designed to allow analysts to buy futures contracts for various scenarios. As the value of these contracts went up or down, other analysts could observe and investigate why, determining how and why others were "putting their money where their mouths were", and whether they should do the same.
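The mechanics behind that kind of market are simple enough to sketch. One common design (Hanson's logarithmic market scoring rule, used by several real prediction markets, though I can't say it was PAM's actual mechanism) lets a market maker quote a price for each scenario that doubles as the market's consensus probability:

```python
import math

class LMSRMarket:
    """Toy logarithmic market scoring rule (LMSR) market maker for a
    binary scenario -- illustrative only, not PAM's actual design."""

    def __init__(self, b=100.0):
        self.b = b           # liquidity parameter: higher = prices move slower
        self.q = [0.0, 0.0]  # outstanding shares of [yes, no]

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        """Instantaneous price of an outcome, readable as the market's
        consensus probability that it occurs."""
        total = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[outcome] / self.b) / total

    def buy(self, outcome, shares):
        """Charge an analyst for `shares` of `outcome`; the purchase
        pushes that outcome's price (probability) upward."""
        new_q = list(self.q)
        new_q[outcome] += shares
        charge = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return charge

m = LMSRMarket()
m.buy(0, 50)                 # an analyst buys 50 "yes" shares
print(round(m.price(0), 3))  # price rises from 0.5 to ~0.622
```

The point Logan describes falls out of the mechanism: when an analyst buys, the price visibly moves, and colleagues can see that someone put money behind a scenario and go investigate why.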
But the "policy analysis market" sank beneath a wave of demagoguery from congressmen who had an astonishing lack of understanding of how prediction markets are used to great effect in investment banking, insurance and other industries.
I'd bet that if the idea of prediction markets had been proposed by someone other than a quack like John Poindexter, we'd have an Intellindex to go along with other recent innovations such as intel analyst blogs and Intellipedia. I'm particularly enamored with Intellipedia because even its unclassified level is a great resource, offering everything from long-view analyses to organization charts.
There is one issue that most folks overlook when discussing markets in the context of the Intelligence Community: managers and executives should be careful when trying to incentivize participation in such an index. Offering bonuses to analysts who are consistently correct is one thing, but tying market performance to personnel evaluations will probably encourage analysts to make overly conservative wagers.