The Numeric Problem

Rodger Baker
Senior VP of Strategic Analysis, Stratfor
Nov 16, 2016 | 08:00 GMT
Here at Stratfor, we are in the midst of creating our annual forecast for 2017. Each time we build one of our forecasts -- whether it is a view of the next quarter, year or decade -- we also review our past performance, focusing on what we got wrong and what we missed in an effort to improve our processes. One question routinely comes up as we re-evaluate our approach: How can we relay our level of confidence in our predictions to readers?
Contributor Perspectives offer insight, analysis and commentary from Stratfor’s Board of Contributors and guest contributors who are distinguished leaders in their fields of expertise.

According to Dan Gardner and Philip E. Tetlock, the best way would be to assign numerical values to our probabilities instead of less definable words such as "likely," "probably" or "expected." And in some ways, they have a point. Forecasts based solely on claims about what "may" or "could" happen have next to no value, and when such ambiguous words are included in legitimate estimates they rarely help to clarify the analyst's meaning. In fact, they can sometimes even become the source of confusion. "Turkey may cancel its immigration deal with Brussels" is no more or less definitive than saying that "Turkey may not cancel its immigration deal with Brussels." Gardner and Tetlock's concern with using language that is open to interpretation, such as "likely" or "highly probable," is valid, too. After all, there have been many cases where these words have been interpreted differently by different audiences.

But that doesn't mean words are useless, and there is a great deal of value in the qualitative aspect of forecasting. In a world becoming obsessed with "big data" and the assumption that there is an answer to everything, so long as we crunch enough numbers, there is a certain freedom that comes with thinking beyond the confines of the quantifiable. And in a world looking for increasingly simple answers, there is something to be said for exploring complexities and embracing uncertainties. 

Searching for More Accurate Estimates

One of the most interesting aspects of Tetlock's forecasting model, developed in part for the Intelligence Advanced Research Projects Activity (IARPA), is the way it breaks down a broader forecast into definable parts. Each statement has a binary outcome that must take place within a given period to be considered accurate: Will X do Y before Z date? These discrete events are then given a probability of happening within the time allotted. By focusing on clearly definable events in a clearly delineated time frame, the forecaster is making a bet on a question that, in many cases, has only two possible answers. Forecasters are then grouped into teams to create aggregate data sets, which, in Tetlock's experiments, yielded more accurate forecasts — at least of the discrete events being studied.
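The mechanics described above can be sketched in a few lines of code. The sketch below is illustrative only: the forecasters, probabilities and outcomes are invented, the aggregation is a simple average, and the accuracy measure is the Brier score commonly used in Tetlock's tournaments (mean squared error between stated probabilities and 0/1 outcomes).

```python
# Sketch of Tetlock-style scoring for binary forecasts: each question asks
# "Will X do Y before Z date?" and resolves to 1 (yes) or 0 (no).
# All names and numbers are hypothetical, not Stratfor's or IARPA's data.

def brier_score(probabilities, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    0.0 is a perfect record; always guessing 50 percent earns 0.25."""
    return sum((p - o) ** 2 for p, o in zip(probabilities, outcomes)) / len(outcomes)

def team_aggregate(forecasts):
    """Simple aggregation: average each question's probabilities across a team."""
    return [sum(question) / len(question) for question in zip(*forecasts)]

# Three hypothetical forecasters on four binary questions.
alice = [0.80, 0.30, 0.90, 0.40]
bob   = [0.70, 0.20, 0.95, 0.60]
carol = [0.60, 0.40, 0.85, 0.50]
outcomes = [1, 0, 1, 0]

team = team_aggregate([alice, bob, carol])
print(brier_score(alice, outcomes))  # one individual's record
print(brier_score(team, outcomes))   # the aggregated record
```

Real tournament aggregation is more sophisticated (weighting by track record, "extremizing" averages), but even this toy version shows the appeal of the method: every estimate is scored against what actually happened, so accuracy becomes measurable.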

Tetlock is not the only expert exploring the role of quantitative methods in forecasting, either. In a recent article in the Journal of Global Security Studies, Dr. Michael Ward called for more forward-looking analysis and discussed the evolving use of numbers in making predictions. An example he gave was the weather forecaster: Today, weather forecasters can take numerical information about any number of small variables, feed it into a computer, and build out far more accurate forecasts than they could decades ago. Apply the same concept to the international system, Ward says, and its complexity can be captured numerically, broken into many small calculations, and reassembled into a more complete forecast.

I find myself in agreement with both men on the need for more accurate and accessible forecasts. I also believe that forecasts must be tracked, a record of accuracy must be kept, and successes and failures must be used to improve the predictive process. But where we differ is that I see value in the qualitative as well as the quantitative. All too often, the estimates we work with are not of binary events but of future trends with many different potential outcomes. It's not that global systems are unpredictable — in fact, Stratfor is based on the premise that for the most part they can be predicted — but that we tend to focus on the broader patterns rather than the individual events that comprise them.

Where Numbers Work . . .

Our forecast is a strategic forecast that rises above the noise of politics and current events, and instead seeks to pinpoint the broader trends that will shape the global environment in the years to come. The geopolitical model we use, rooted in the influence of place on the progress of organized groups of people over time, offers a synthetic approach to understanding the past, explaining the present, and peering into the future. It combines the study of geography, politics, economics, security, society, history and technology to tease out the deeper shifts underway that are molding the international system. Just as in economics, there is an invisible hand in politics and international relations. 

Geopolitics helps us to cut beneath the churning surface of day-to-day events and catch a glimpse of the undercurrents. It is the search for the permanent (relatively speaking), and the interaction between those enduring truths and the ever-changing circumstances surrounding them. Geopolitics gives context to the present and a window into the future, though by its very nature it is better suited to the longer term than to the immediate. That there will be daily ups and downs is a given, but the shorter the time frame in question, the more specific the prediction and the greater the impact of individual choice and randomness will be. This is where intelligence kicks in, and where the quantitative becomes more applicable.

If we look back at the question of whether Russia would try to blaze a land bridge to Crimea, for example, an assessment of distance, materiel, logistics and lines of defense — all quantifiable factors — would not alone have provided the answer. But it would have revealed the constraints within which Russian leaders had to make their decisions. At the level of the individual policymaker, objective realities usually outweigh subjective desires.

The same case can be made for China, which has rapidly transformed from a country self-sufficient on key commodities to one that often consumes twice as much as it produces. This quantifiable change in economic dependency has driven Chinese leaders to secure access to those commodities abroad and to ensure that their supply lines remain uninterrupted. The way Beijing has chosen to do that is to develop its navy and assert its maritime claims in the South and East China seas. It did not have to follow this path, but quantitative factors exerted pressure on it to meet these imperatives in some way.

. . . And Where They Fall Short

Clearly, the quantifiable is critical to figuring out the smaller components of a strategic forecast. Economic factors, social pressures, political balances, correlation of forces, seasonality, infrastructure and the like all rely in some way on numbers. But the further out we move in time and focus, the more the volatile swings of the short term smooth out into sweeping trends — and trends are not events that fall neatly within a specific range of numerical probability. How can we possibly test whether Russia's compulsion to reassert its dominance in the regions along its borders, in the hope of creating a buffer zone to better deal with its own economic and demographic issues at home, has a likelihood of 72 percent, 68 percent or any percentage at all? This is a pattern, not a discrete event that can be precisely and objectively measured.

Another challenge in trying to use numbers to communicate confidence in a prediction is that, if they aren't derived from quantifiable factors plugged into an algorithm, they often become ways for an analyst to just express a gut feeling. Even if that gut feeling is informed by experience, research and excellent intuition, the number itself has no statistical foundation and, in many ways, is no more precise than tying a numerical range to terms like "probably." Moreover, from a strategic planning perspective, is there really a different way to prepare for a general shift over the next five years that has a 74 percent chance of happening as opposed to a 78 percent chance? Does the appearance of specificity in trend analysis truly lend itself to more effective planning or countermeasures?
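The alternative mentioned above, tying numerical ranges to estimative terms, can be made concrete with a house glossary. The ranges below are purely illustrative (loosely echoing Sherman Kent's classic "words of estimative probability"), not an actual Stratfor convention, and the function name is my own.

```python
# Hypothetical house glossary tying each estimative term to a numeric range.
# The specific ranges are illustrative, not any agency's official standard.
ESTIMATIVE_TERMS = {
    "almost certain":    (0.90, 1.00),
    "highly likely":     (0.75, 0.90),
    "likely":            (0.55, 0.75),
    "roughly even odds": (0.45, 0.55),
    "unlikely":          (0.25, 0.45),
    "highly unlikely":   (0.00, 0.25),
}

def term_for(probability):
    """Return the glossary term whose range covers a numeric estimate."""
    for term, (low, high) in ESTIMATIVE_TERMS.items():
        if low <= probability < high or (high == 1.0 and probability == 1.0):
            return term
    raise ValueError(f"probability out of range: {probability}")

print(term_for(0.68))
print(term_for(0.72))
```

Note the trade-off this makes explicit: nearby numeric estimates such as 0.68 and 0.72 collapse into the same word, "likely." A glossary buys consistent interpretation at the cost of exactly the specificity whose practical value the paragraph above questions.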

For dichotomous events, there might be more room for that kind of numeric estimate. But even then, it rarely amounts to much more than an indicator of the analyst's intuitive prediction, rather than the output of a statistical pattern of past and present variables. Assigning a value does force an analyst to be more specific, but so can an internal process that doesn't allow for ambiguity. A quantitative approach isn't the only way to push analysts to test and challenge each assertion or component of a forecast, or to test the logic of all components in their interaction.

Forecasting, without a doubt, faces problems: the tendency to hedge and to leave room for interpretation so as to avoid taking responsibility when an outcome doesn't match the prediction. But these are problems of analytical process and rigor more than of language. Clarifying word choice and requiring percentages instead of descriptors are both tools we can use to counter intentional vagueness, force analysts to overcome the fear of error, and make a clear call about what the future will look like.

Copyright © Stratfor Enterprises, LLC. All rights reserved.
