Abstract
Forecasting tournaments are level-playing-field competitions that reveal which individuals, teams, or algorithms generate more accurate probability estimates on which topics. This article describes a massive geopolitical tournament that tested clashing views on the feasibility of improving judgmental accuracy and on the best methods of doing so. The tournament’s winner, the Good Judgment Project, outperformed the simple average of the crowd by (a) designing new forms of cognitive-debiasing training, (b) incentivizing rigorous thinking in teams and prediction markets, (c) skimming top talent into elite collaborative teams of “super forecasters,” and (d) fine-tuning aggregation algorithms for distilling greater wisdom from crowds. Tournaments have the potential to open closed minds and increase assertion-to-evidence ratios in polarized scientific and policy debates.
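The abstract names aggregation algorithms without detailing them; as a minimal illustration of one published approach in this space, the sketch below implements logit-pool extremizing in the spirit of Satopää et al. (2014). It is not the tournament's exact algorithm, and the function name and the extremizing factor `a` are illustrative assumptions.

```python
import math

def extremized_pool(probs, a=2.0):
    """Aggregate individual probability forecasts by averaging in
    log-odds space, then extremizing by factor a (a > 1 pushes the
    pooled forecast away from 0.5). Illustrative sketch only."""
    eps = 1e-6  # clip to avoid infinite log-odds at exactly 0 or 1
    logits = [
        math.log(min(max(p, eps), 1 - eps) / (1 - min(max(p, eps), 1 - eps)))
        for p in probs
    ]
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-a * mean_logit))

# Example: three forecasters lean toward "yes"; the extremized pool
# (about 0.78) is sharper than the simple average of 0.65.
print(extremized_pool([0.6, 0.7, 0.65]))
```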
