Judgment Upgrade: Improving Forecasting Techniques

A recent report on best practices in forecasting has the potential for long-range positive impact in many fields, including medical research and the development and operation of healthcare provider organizations. In an article published in Harvard Business Review (1), authors Paul J.H. Schoemaker and Philip E. Tetlock posit that it’s possible for almost any organization to improve its ability to forecast—if leaders have a thick skin. The process could prove embarrassing, since it will most likely expose the ineffectiveness of old methods and demonstrate that even the improved forecasting methods are not as accurate as one would like them to be.

The article takes as its starting point the statement by the U.S. National Intelligence Council, in October 2002, that Iraq possessed chemical and biological weapons of mass destruction and was producing more. The resultant costly and unproductive military action could have been avoided with better intelligence.

That mistake spurred considerable research, including a multi-year (2011-2015) prediction tournament funded by the Intelligence Advanced Research Projects Activity, in which five academic research teams competed in addressing various economic and geopolitical questions. The winning team, called the Good Judgment Project (GJP), was co-led by Tetlock and Barbara Mellers, PhD, of the Wharton School at the University of Pennsylvania. The project consisted of a series of forecasting contests, from which the organizers drew three main conclusions:

  • people with broad general knowledge often make better forecasts than specialists,
  • teams often out-predict individuals, and
  • proper training often enhances predictive talent.

These findings, the report says, could influence the way various organizations forecast outcomes such as business trends, employee performance and financial results. The article stresses that the improved methods it recommends are not certain to produce accurate predictions—but if Company A is consistently a little better at judgment calls than Company B, its competitive advantage will grow exponentially.
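
The claim that a small, consistent edge compounds can be made concrete with a quick simulation. The sketch below is purely illustrative: the accuracies, payoff multipliers and decision count are assumptions chosen for demonstration, not figures from the article.

```python
import random

random.seed(0)

def compound_value(accuracy: float, decisions: int = 200,
                   win: float = 1.05, loss: float = 0.97) -> float:
    """Multiply starting value by `win` for each correct call and by
    `loss` for each miss, over a series of independent decisions."""
    value = 1.0
    for _ in range(decisions):
        value *= win if random.random() < accuracy else loss
    return value

# Company A is only slightly better at judgment calls than Company B.
a = compound_value(accuracy=0.55)  # assumed accuracies, for illustration
b = compound_value(accuracy=0.50)
print(f"A grew to {a:.1f}x, B to {b:.1f}x (ratio {a / b:.1f})")
```

The exact numbers vary with the random seed, but under these assumptions the gap between A and B keeps widening as decisions accumulate—which is the compounding advantage the authors describe.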

Methods of improving forecasts will be more effective in some circumstances than in others, of course. Many issues already are highly predictable, given the proper tools, without any improvement in subjective judgment skills. Other issues are so little understood—and so unpredictable by their very nature—that new methods of forecasting them are almost impossible to develop.

Predicting the predictable

However, for a great many questions, there exist some data that can be analyzed scientifically, and from which logical conclusions—or at any rate, predictions—can be drawn. These might include the efficacy of a product (or service) in development, the performance of a prospective investment or the suitability of a prospective new hire. Schoemaker and Tetlock warn that their methods and approaches have to be tailored to the specific needs of an organization, but they have identified a set of practices that in general might be useful to any individual or team analyzing a problem that requires improved subjective judgment.

“Most predictions made in companies, whether they concern project budgets, sales forecasts or the performance of potential hires or acquisitions, are not the result of cold calculus,” the authors note. “They are colored by the forecaster’s understanding of basic statistical arguments, susceptibility to cognitive biases, desire to influence others’ thinking and concerns about reputation. Indeed, predictions are often intentionally vague to maximize wiggle room should they prove wrong.”

The GJP offers training in probability concepts, and that training measurably boosts the accuracy of trainees’ predictions. The concepts include (two of them are illustrated in the code sketch after this list):

  • regression to the mean,
  • updating of probability estimates to allow for new data,
  • more precise definition of what is to be predicted and the time frame, and
  • basing the prediction on a numeric probability.
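
The article doesn’t spell out the training exercises themselves, so the sketch below is an illustration under assumptions rather than GJP’s actual curriculum: the forecasting question, prior and likelihoods are all hypothetical. It demonstrates two of the listed concepts: updating a probability estimate with Bayes’ rule as new data arrives, and expressing the prediction as a numeric probability that can later be scored (here with a Brier score, a standard accuracy measure for probabilistic forecasts).

```python
def bayes_update(prior: float, lik_if_true: float, lik_if_false: float) -> float:
    """Apply Bayes' rule: revise P(event) after one piece of evidence."""
    numerator = prior * lik_if_true
    return numerator / (numerator + (1 - prior) * lik_if_false)

def brier_score(forecast: float, outcome: int) -> float:
    """Squared error between a numeric probability and the 0/1 outcome
    (lower is better)."""
    return (forecast - outcome) ** 2

# Hypothetical question: "Will the project ship by Q3?" (numbers invented)
p = 0.50                           # initial estimate: a coin flip
p = bayes_update(p, 0.8, 0.4)      # favorable pilot results arrive
p = bayes_update(p, 0.3, 0.5)      # a key supplier slips its schedule
print(f"updated forecast: {p:.2f}")

# Once the outcome is known (1 = shipped on time), score the forecast.
print(f"Brier score: {brier_score(p, 1):.3f}")
```

Committing to a number and scoring it against the outcome removes the “wiggle room” the authors criticize: a deliberately vague prediction cannot be scored at all.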

The article also notes the presence of confirmation bias in many groups and individuals. Researchers usually have preconceived hypotheses about what their conclusions will be (or should be), and will look for data that’s likely to lead to those conclusions. GJP’s methods teach trainees to beware of these biases and to look for evidence that might contradict their hypotheses. GJP also teaches trainees to beware of “streaks,” or other deviations that might look like developing patterns but are usually explicable by the small samples in which they occur.
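
The warning about streaks is easy to demonstrate with a simulation. The sketch below (purely illustrative, not from the article) counts how often a fair coin, flipped just 20 times, produces a run of five or more identical outcomes—the kind of streak a forecaster might mistake for an emerging pattern.

```python
import random

random.seed(1)

def longest_run(flips: list[int]) -> int:
    """Length of the longest run of identical consecutive outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

# 1,000 small samples of 20 fair coin flips each
samples = [[random.randint(0, 1) for _ in range(20)] for _ in range(1000)]
with_streak = sum(longest_run(s) >= 5 for s in samples)
print(f"{with_streak / 10:.0f}% of 20-flip samples contain a streak of 5+")
```

Close to half of these small samples contain a five-flip streak by chance alone, so a pattern of that size is, on its own, weak evidence of anything.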