Wednesday, April 23, 2014

Intelligence Assessment And Unpredictability (Follow Up To: How To Analyze Black Swans)

(Note: I have known Greg Fyffe, former Executive Director of the Intelligence Assessment Secretariat in the Privy Council Office in Canada, for a number of years. His long experience in intelligence, and specifically in intelligence analysis, places him in a unique position with respect to the Black Swan issue. He recently took the time to share some of his thoughts with me and others in the International Association For Intelligence Education Forum. I asked him if he would allow me to re-print his comments here and he agreed.)

By Greg Fyffe
(Executive Director, Intelligence Assessment Secretariat, Privy Council Office, Ottawa, 2000-2008)

Kris Wheaton’s reflection on his class discussion (How to Analyze Black Swans) is useful in understanding the challenge of elaborating future possibilities in intelligence assessments. Mark Lowenthal’s comment on Black Swans also illuminates an important aspect of uncertainty—while a situation may be completely mysterious to some observers, others may be very familiar with every aspect of it. Their knowledge, however, may not be transferable to intelligence analysts. 

While many assessments provide useful contextual background, the most helpful to clients are those which offer insight into what will happen next. This is obviously not always easy. These are my observations, based on eight years as the head of an intelligence analysis unit.

Some events are high impact and high probability. Analysts may not be alone in seeing the significance of a developing situation, but if they can add evidence to the estimate, that is helpful; additional evidence that an event may well occur justifies more definite action to prepare for it.  

Analysts frequently need to look at possibilities which are low probability but high impact. There are many ways of doing this, including scenario-building or other speculative techniques, but since there are many significant but low-probability possibilities, there are obvious risks in taking expensive preventive action against all of them.
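
One way to see that risk concretely is a back-of-the-envelope expected-cost comparison. The Python sketch below is my illustration, not anything from the original text, and every probability, loss, and mitigation cost in it is invented:

    # Toy numbers (invented): probability of each event, the loss if it
    # occurs, and the cost of preventive action against it.
    scenarios = [
        (0.02, 500, 30),
        (0.01, 800, 40),
        (0.03, 400, 25),
        (0.01, 1000, 60),
    ]

    expected_loss = sum(p * loss for p, loss, _ in scenarios)
    prevention_cost = sum(cost for _, _, cost in scenarios)

    print(f"Expected loss if nothing is done:  {expected_loss:.0f}")   # 40
    print(f"Cost of acting on every scenario: {prevention_cost:.0f}")  # 155

With enough low-probability possibilities on the list, acting on all of them costs far more than the expected losses, which is why speculative techniques must be paired with some way of deciding which possibilities merit expensive preparation.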

Kris uses the pile-of-sand metaphor to describe a situation which appears increasingly volatile. For analysts this is a very common analytical challenge. The possible event is high impact and high probability, but also unpredictable. Even though a possible outcome appears highly probable, it may never occur, or it may be triggered by random events, so even though it later looks as if analysts should have seen it coming, they could not have. A common example of this is an authoritarian regime. Rulers are unpopular, the economy is distorted, the security services are cruel and hated, and the population is unable to express dissent. If something triggers a revolt, a mass uprising can quickly overturn a regime which seemed in complete control. But sometimes the trigger event never happens and an authoritarian regime is able to stay in power for a very long period. The fall of the Soviet Union is an example of this. Analysts could describe the vulnerabilities of the regime, but the regime’s ability to contain those vulnerabilities was effective for decades.
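
The pile-of-sand image comes from the "self-organized criticality" literature, and a minimal sandpile simulation makes the point concrete. This sketch is my illustration, not part of Fyffe's argument; the grid size, threshold, and reporting cutoff are arbitrary choices:

    import random

    # A minimal Bak-Tang-Wiesenfeld sandpile (illustrative sketch only).
    # Grains are added one at a time; when a site holds THRESHOLD grains it
    # topples, passing one grain to each neighbour, possibly setting off a
    # chain reaction (an "avalanche").
    SIZE, THRESHOLD = 20, 4
    random.seed(42)  # arbitrary seed, for repeatability
    grid = [[0] * SIZE for _ in range(SIZE)]

    def topple(grid):
        """Relax the grid; return the number of topplings (avalanche size)."""
        avalanche = 0
        unstable = [(r, c) for r in range(SIZE) for c in range(SIZE)
                    if grid[r][c] >= THRESHOLD]
        while unstable:
            r, c = unstable.pop()
            if grid[r][c] < THRESHOLD:
                continue
            grid[r][c] -= THRESHOLD
            avalanche += 1
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < SIZE and 0 <= nc < SIZE:  # edge grains fall off
                    grid[nr][nc] += 1
                    if grid[nr][nc] >= THRESHOLD:
                        unstable.append((nr, nc))
        return avalanche

    for grain in range(5000):
        r, c = random.randrange(SIZE), random.randrange(SIZE)
        grid[r][c] += 1
        size = topple(grid)
        if size > 50:  # report only the rare, large "revolts"
            print(f"grain {grain}: avalanche of {size} topplings")

Every rule in the model is known, yet which grain sets off a large avalanche is effectively unpredictable. That is the analyst's position: the building pressures are describable, the trigger is not.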

Maybe the eventual outcome is truly inevitable in the long run, but if the possible time frame ranges over many years, then analysts cannot offer much useful anticipation to clients. In these cases the role of analysis is to produce “building block” assessments: elaborating the reasons why the underlying trends in a political system are potentially explosive, describing possible warning signs for the explosion, and paying close attention to intelligence or diplomatic reporting that may suggest the explosion is an imminent possibility. In the end, the trigger may be something that was not foreseen, because it could not be foreseen.

A Tunisian vendor burns himself alive, and suddenly becomes the symbol and rallying point for a suppressed population.  

The classic challenge for intelligence systems is the surprise planned by a few. An event seems unlikely or illogical, but a few do know, and intelligence about their knowledge could make a difference. This category can be broken down into the monitored and the unmonitored. In the former, possible perpetrators are being watched (as in 9/11) but their actions are still a surprise. In the unmonitored situation, the actors were not being tracked and their actions were therefore not anticipated. A new terrorist group, in a region not considered at risk, is only seen after its first attack. Events in these two categories are at least potentially knowable, since there are people with knowledge and they are probably communicating. However, intelligence systems, unless blessed by luck, do not have the resources to look everywhere with intensity, and if they do, the amount of information produced may simply lead to another type of assessment dilemma—unmanageable volume.

Some events with high impact are completely unpredictable. These might include the early death of an important leader, some types of natural disasters, an act by someone who was mentally disturbed, or, more frequently, the third- and fourth-order consequences of an event which were not foreseen because the possible combinations after the initial event were simply too numerous or hidden to comprehend. The possibilities extend beyond imagination.

What do these variations mean for intelligence systems and particularly for intelligence assessment? 

  • First, guessing, asking “what if” questions, and other speculative techniques are necessary to identify possibilities, particularly those worth further analysis.
  • Second, background pieces which identify curious combinations of circumstances, inexplicable courses of action by key actors, or updated contextual information are necessary to track pressures which may be building to a critical point.
  • Third, background and speculative pieces need to be widely shared, to increase the chance that someone will have additional insight into what they mean.
  • Fourth, there will always be events that were not foreseen. The challenge for the analyst is to quickly assess their significance and describe them vividly.
  • Fifth, we cannot foresee all the possible consequences of an event, but if analysts are alert to possibilities they can forecast as events develop. As one possibility becomes a reality and others are eliminated, analysts can then predict another set of possibilities (a minimal sketch of this updating follows the list). It is very difficult to see third- and fourth-order consequences, but it is helpful to clients if a running forecast can anticipate the immediate and secondary consequences of a developing story.
  • Sixth, some high probability events will never occur, despite all the evidence that they should. Analysis typically describes the pressures that are building, but also the counter-forces that may contain them. A prediction can be made on the balance of probabilities or apparent trends, yet in the end the pressures for change may never produce change. A volatile, unstable situation simply continues to be volatile and unstable.
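
As a minimal sketch of the “running forecast” idea in the fifth point, the following applies Bayes' rule to a set of competing possibilities as a new piece of evidence arrives. The hypotheses, the report, and all the numbers are invented for illustration; nothing here is Fyffe's method:

    # Invented hypotheses and starting probabilities, purely for illustration.
    priors = {"crackdown": 0.5, "negotiation": 0.3, "collapse": 0.2}

    def update(priors, likelihoods):
        """Bayes' rule: posterior proportional to prior x P(evidence | hypothesis)."""
        unnorm = {h: priors[h] * likelihoods[h] for h in priors}
        total = sum(unnorm.values())
        return {h: round(v / total, 2) for h, v in unnorm.items()}

    # Hypothetical report: troops are withdrawing from the capital.
    # How likely is that observation under each hypothesis?
    posterior = update(priors, {"crackdown": 0.1, "negotiation": 0.6, "collapse": 0.5})
    print(posterior)  # {'crackdown': 0.15, 'negotiation': 0.55, 'collapse': 0.3}

Each new report shifts probability mass among the surviving possibilities, so the forecast can be re-issued as the story develops rather than standing or falling on a single prediction.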

We often refer to “noise” in intelligence as the reason for not seeing what was about to happen—there are so many bits of information about future developments that the truly significant pieces are only seen clearly after events have taken place. The other dimension of this dilemma is that the analyst faces a very large number of future possibilities, whatever their level of probability, and whether or not there is supportive evidence. 

When an assessment failure is reviewed, we see a sequence of events that did occur, and look at the information that the analyst could have used and the possibilities that could have been taken more seriously.


The analyst asking important questions often faces a vast amount of information of marginal utility or reliability, not enough high relevance and high quality intelligence, and a large number of plausible futures. The truly gifted individual, or the truly functional assessment process, is still able to assess likely outcomes often enough to be indispensable to decision-makers. 
