Phil Tetlock: When Should We Listen To Experts?
Philip E. Tetlock is a psychologist who is Professor of Leadership at the Haas School of Business at the University of California, Berkeley. His book, Expert Political Judgment, combines several of his research interests, such as how experts learn (or fail to learn) from experience, and how to de-bias judgment and choice to overcome common cognitive biases such as belief perseverance and overconfidence. In 2005 the American Political Science Association awarded Expert Political Judgment both the Woodrow Wilson Award for best book published on government, politics, or international affairs and the Robert E. Lane Award for best book in political psychology.
The experts barely, if at all, outperformed informed non-experts, and neither group of forecasters did well against simple rules and models. When Tetlock divided outcomes into three states, "Better", "Same", and "Worse", the forecasters frequently did worse than assuming that each state is equally likely.
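The comparison against the equal-likelihood baseline can be illustrated with a Brier-style probability score over the three outcome states. The sketch below uses made-up numbers, not Tetlock's data, and assumes quadratic (Brier) scoring of probability forecasts, where lower scores are better and zero is perfect.

```python
# Illustrative only: hypothetical forecasts, not Tetlock's data.
# Brier-style probability score over three outcome states
# ("Better", "Same", "Worse"): lower is better, 0 is perfect.

def brier_score(forecast, outcome_index):
    """Squared error between forecast probabilities and the
    realized outcome, encoded as a one-hot vector."""
    return sum(
        (p - (1.0 if i == outcome_index else 0.0)) ** 2
        for i, p in enumerate(forecast)
    )

# A confident expert puts 80% on "Better"; the uniform
# baseline assigns 1/3 to each state.
expert = [0.8, 0.1, 0.1]
uniform = [1 / 3, 1 / 3, 1 / 3]

# Suppose the realized outcome is "Worse" (index 2): the
# confident-but-wrong expert is penalized far more heavily
# than the uniform baseline.
print(brier_score(expert, 2))
print(brier_score(uniform, 2))
```

Because quadratic scoring punishes confident misses so severely, a forecaster who is frequently overconfident can average a worse score than the uninformative uniform guess, which is the pattern the book reports.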
That experts are no better forecasters than informed non-experts is not a new result. One of us (Armstrong 1980), based on a review of about a dozen small-scale studies, proposed the “seersucker theory” of forecasting: “No matter how much evidence exists that seers do not exist, suckers will pay for the existence of seers.” Beyond a low threshold, additional expertise did not result in greater accuracy in forecasting, and there was some evidence that those with the most expertise were less effective at using new information and thus less able to improve their accuracy.
Tetlock’s primary finding is that political experts are poor forecasters. He demonstrates this with a large sample of forecasts and with comparisons to reasonable alternatives. At the very least, then, decision makers should invest heavily in developing contingency plans, since expert forecasts are of so little value.