Monday, November 24, 2014

The Game Every Intel Professional Should Play Is Now For Sale!

CVTV, a new version of the ancient game of Hnefatafl
Hnefatafl is one of my favorite games.  Quick to play, easy to learn, created by Vikings - what's not to love?

I recommend it to intelligence professionals, however, because it is an asymmetric game that forces players to really think like their opponents to win.  That, in my estimation, is a skill worth learning.

Last year, I ran a successfully funded Kickstarter project to produce my variant of this game.  Well, I have finally managed to fulfill all of the rewards promised under that campaign (long story...don't ask...) and am now able to make the game available to the general public!

If you are just interested in learning the kinds of things I talked about in the interview below or in just owning a nicely made, portable variant of the game, then I recommend the basic set.  This is also the set I would recommend to educators and trainers who would like to use the game to foster a discussion on asymmetry of goals, forces or geography (Contact me directly for discounts on bulk orders).


If, on the other hand, you are into Cthulhu or into Vikings or, better yet, into Cthulhu vs. The Vikings, I recommend you think about buying the deluxe set (and, what the heck, you might as well get the comic to go with it - so you'll know the story!).

Whichever set you get, I am certain that you will enjoy the game!  And Happy Thanksgiving to all!

Tuesday, October 21, 2014

Leksika - A New Site On Russia, Eurasia Worth Watching!

I just received a note from one of the sharper crayons that has emerged from the Mercyhurst Intel Studies box - Spencer Vuksic.  

Spencer, a truly gifted analyst and Russian linguist currently pursuing his master's in International Studies at Johns Hopkins, has, along with fellow Mercyhurst alum Graham Westbrook, started a new project - Leksika - to provide open source intelligence analysis on all things Russian and Eurasian.

According to Spencer, "Leksika’s value proposition is in the application of intelligence analysis to political, social, and economic shifts in the region in opposition to the largely polarized reporting from both the West and Russia."  

Just in the last month they have published short, easy-to-read but highly informative pieces covering such diverse topics as Russia's partnerships with Serbia and Latvia, the current situation in Crimea, Israeli-Russian relations and Poland's geopolitical positioning.  Earlier posts have reached even more broadly, including a three-part series on Russia's cyber strategy.

One of the most interesting and FREE features of the site is their "ReapReport".  Here they do a side-by-side comparison of the top news stories coming out of western and Russian media.  More importantly, they add a highly useful "What to Watch" blurb in order to highlight upcoming events of interest.

While clearly still in the start-up stage, Leksika is already quite good and has the potential to be a one-stop shop for unbiased analysis of Russia and Eurasia.  Recommend you check it out and take advantage of the free subscriptions!

Tuesday, October 7, 2014

Intelligence And Ethics: Now You Can Check The Codes!

Some would say that ethics in intelligence is an oxymoron.  Certainly, the ethical challenges faced by today's intelligence professionals are more severe than at any other time in human history.

It is interesting to note, in this regard, that all three of the major sub-disciplines of intelligence (national security, law enforcement and business) now have publicly available codes of ethics for their practitioners.

The most recent of these is from the newly minted National Intelligence Strategy:
As members of the intelligence profession, we conduct ourselves in accordance with certain basic principles. These principles are stated below, and reflect the standard of ethical conduct expected of all Intelligence Community personnel, regardless of individual role or agency affiliation. Many of these principles are also reflected in other documents that we look to for guidance, such as statements of core values, and the Code of Conduct: Principles of Ethical Conduct for Government Officers and Employees; it is nonetheless important for the Intelligence Community to set forth in a single statement the fundamental ethical principles that unite us and distinguish us as intelligence professionals.

MISSION. We serve the American people, and understand that our mission requires selfless dedication to the security of our nation.

TRUTH. We seek the truth; speak truth to power; and obtain, analyze, and provide intelligence objectively.

LAWFULNESS. We support and defend the Constitution, and comply with the laws of the United States, ensuring that we carry out our mission in a manner that respects privacy, civil liberties, and human rights obligations.

INTEGRITY. We demonstrate integrity in our conduct, mindful that all our actions, whether public or not, should reflect positively on the Intelligence Community at large.

STEWARDSHIP. We are responsible stewards of the public trust; we use intelligence authorities and resources prudently, protect intelligence sources and methods diligently, report wrongdoing through appropriate channels; and remain accountable to ourselves, our oversight institutions, and through those institutions, ultimately to the American people.

EXCELLENCE. We seek to improve our performance and our craft continuously, share information responsibly, collaborate with our colleagues, and demonstrate innovation and agility when meeting new challenges.

DIVERSITY. We embrace the diversity of our nation, promote diversity and inclusion in our workforce, and encourage diversity in our thinking.
The Society of Competitive Intelligence Professionals has long had a code:
To continually strive to increase the recognition and respect of the profession.
To comply with all applicable laws, domestic and international.
To accurately disclose all relevant information, including one's identity and organization, prior to all interviews.
To avoid conflicts of interest in fulfilling one's duties.
To provide honest and realistic recommendations and conclusions in the execution of one's duties.
To promote this code of ethics within one's company, with third-party contractors and within the entire profession.
To faithfully adhere to and abide by one's company policies, objectives and guidelines.
Finally, the International Association of Crime Analysts offers these ethical guidelines to its members:
Theoretically today’s professional crime analyst is expected to know the details of every crime in his or her jurisdiction, and often to predict when the next will occur. In reality, the title of crime analyst can mean different things even in the same agency. Generally, a crime analyst is one who monitors crime trends and patterns, researches and analyzes similarities and differences in crime details, and reports those findings to the appropriate personnel that can address those crimes either through deterrence or prevention. Many skills and abilities are necessary to complete the crime analysis process. Necessary skills include logic and critical thinking, research skills, organizational skills to organize facts and findings, written and oral communication skills, and computer skills. Necessary personal traits include a desire to aid in the reduction of crime through the legal and ethical examination of crime facts and data.
The professional crime analyst assists law enforcement managers in decision making, supports street officers and detectives with information helpful to their jobs, and provides service to other crime analysts and to the general public. As professional crime analysts, we commit ourselves to the following principles:
 
Personal Integrity
    Maintain an attitude of professionalism and integrity by striving to perform at the highest level of one’s proficiency and competency, in order to achieve the highest level of quality.
    Remain honest and never knowingly misrepresent facts.
    Accurately represent one’s own professional qualifications and abilities, and ensure that others receive due credit for their work and contributions.
    Seek and accept honest criticism for one’s work, and take personal responsibility for one’s errors.
    Treat all persons fairly regardless of age, race, religion, gender, disability, sexual orientation, or nation of origin.
    Practice integrity and do not be unduly swayed by the demands of others.
 
Loyalty to One’s Agency
    Safeguard the privacy and confidentiality of restricted information or documents.
    Thoroughly research, analyze, prepare, and disseminate quality work products to the best of one’s ability, ensuring that all reports and documents are accurate, clear, and concise.
    Faithfully adhere to and abide by one’s departmental policies, objectives, and guidelines. Support colleagues in the execution of their lawful duties, and oppose any improper behavior, reporting it where appropriate.
 
Commitment to the Crime Analysis Profession
    Continually strive to increase the recognition of and respect for the profession by participating in professional crime analysis associations, contributing time and effort toward their goals and missions.
    Advocate professional, research-based crime analysis functions within the law enforcement environment.
    Seek training and education throughout one’s career; remain current with trends and practices in crime analysis.
    Contribute to the professional education, training, and development of other crime analysts. Share information and the results of research and development by responding to information requests, submitting information to individuals and organizations, participating in meetings, or publishing articles.
    Present methodologies, results and recommendations through fair, honest, conscientious, independent and impartial judgment.
    Exercise objectivity, impartiality, accuracy, validity and consistency in all research conducted. 
If you are looking for an interesting exercise, have your students or colleagues try to apply all three codes of ethics to this situation.

Wednesday, September 24, 2014

Advanced Analytic Techniques (The Blog) Is Back!

Check out www.advat.blogspot.com!
Each year, I teach a class called Advanced Analytic Techniques (AAT) here at Mercyhurst.  It is a seminar-style class designed to allow grad students to dig into a variety of analytic techniques and (hopefully) master one or two.   

The students get to pick both the topic and the technique on which they wish to focus so you wind up with some pretty interesting studies at the end.  For example, we have applied the traditional business methodology of "best practices" to western European terrorist groups and the traditional military technique of Intelligence Preparation of the Battlefield to the casino industry.

As you can imagine, some of these projects gain a bit of notoriety for their unique insights.  One of my former students, Jeff Welgan, even had his AAT project written up in the book Hyperformance.

Beyond this deep dive that each student is required to do, the class is also designed to teach students how to evaluate analytic techniques for things such as validity and flexibility.  To help with this process, each week we take a quick look at an analytic technique that no one in the class is using in their projects.  

We start this process with a tour d'horizon of the available literature on the method with a particular focus on the literature that is higher up the evidence pyramid and relevant to intelligence analysis.  At the end of the week, one member of the class runs an abbreviated demo of the technique using the rest of the class as guinea pigs.  Once we are done, we all sit down and write up our thoughts about the method.  Last week, for example, we took a (quick) look at SWOT.  This week we will be examining various forms of Red Teaming.

All of this - the summaries and critiques of the articles we have found, and our overall "evaluation" of the technique - gets posted onto the Advanced Analytic Techniques blog each week.  Over the years, the blog has become increasingly popular and I certainly encourage everyone to take a look and, if you have a comment, join in!

Tuesday, September 2, 2014

Three Convergent Thinking Techniques Every Analyst Should Master

"The most important failure was one of imagination." -- 9/11 Report

This sentence and the reforms that it (and others like it) compelled after the attack on the twin towers have driven many of the changes in the way intelligence analysts do their jobs over the last 13 years.

Fundamental to these changes were (and are) attempts to get analysts to think differently. Specifically, most of the discussion and many of the efforts were aimed at increasing divergent thinking abilities among intelligence professionals.  Red teaming, brainstorming, and the ubiquitous informal encouragement to "think outside the box" are all, to one degree or another, divergent thinking strategies.

There are good reasons, however, for analysts to master the flip side of divergent thinking - convergent thinking - as well.

There is quite a bit of excellent research that suggests that having a strong divergent thinking skillset is not enough.  In fact, the research goes further.  Having only strong divergent thinking skills likely lowers forecasting accuracy.

That's right - lowers.

Psychologists, for example, have long known that having too many choices is not only unproductive but counterproductive. In 2000, Sheena Iyengar and Mark Lepper showed the effects of too many options with respect to consumer products. Participants in their experiments showed more interest in the huge selection of jams with which they were presented but were more likely to actually make a decision and buy one (and to be more satisfied with their purchase) if presented with a smaller assortment.  Don't understand how this works?  Just take a look at the clip with Robin Williams as a recent Soviet emigre in the movie Moscow On The Hudson at the top of this post...

Beyond the realm of jam and much more directly relevant to intel professionals, Philip Tetlock, in his groundbreaking work on the correlates of forecasting accuracy, Expert Political Judgment, found that one popular analytic methodology, Scenario Analysis, doesn't work at all.  Generating more and more plausible scenarios is actually counterproductive.  His experiments showed that "such exercises will often fail to open the mind of inclined-to-be-closed-minded hedgehogs but succeed in confusing already-inclined-to-be-open-minded foxes... (p. 199 of the 2005 edition for those interested in such things)"

Finally, research conducted by Mercyhurst's own Shannon (Ferrucci) Wasko using a real world intelligence problem and a controlled experiment showed much the same effect:  Divergent thinking alone lowers forecasting accuracy.

What's an analyst to do?

While divergent thinking is useful for developing concepts, ideas or hypotheses, convergent thinking is useful for focusing the analytic effort.  I have found that there are three crucial convergent thinking techniques:

  • Grouping.  Grouping (and its corollary, Establishing Relationships) is probably the most useful of the convergent thinking techniques.  In order to get a handle on all of the ideas that typically emerge from any divergent thinking exercise, it is important to be able to group similar ideas or hypotheses together.  Critical to this effort are the labels assigned to the various groups.  All sorts of cultural and cognitive biases can easily come into play with poorly chosen group names (For example, think how easily the labels "terrorist", "freedom fighter", "good" or "evil" can influence future analysis).  Mindmapping and other concept mapping techniques are very useful when attempting to use grouping as a way to deal with an overabundance of ideas.
  • Prioritizing.  Deciding which ideas, concepts or hypotheses deserve the most emphasis is crucial if collection and analytic resources are to be used efficiently.  Treating every idea as if it is equal to all the others generated by the divergent thinking process makes no sense.  Yet, as with any convergent thinking process, the decision regarding which concept is first among the putative equals should be made carefully.  Problems typically arise when the team setting the priorities is not diverse enough.  For example, a team of economists might well give economics issues undue emphasis. 
  • Filtering.  Filtering, as a convergent thinking technique, explicitly recognizes the awful truth of intelligence analysis - there is never enough time.  In its extreme application, filtering can be used to eliminate some possibilities entirely from further consideration.  Typically, however, analysts will use filtering to limit the level and extent of collection activities.  For example, intel professionals looking at pre-election activity in a certain country might decide to focus their collection activities at the county rather than at the city or town level.  As with grouping and prioritizing, where to draw these kinds of lines is fraught with difficulty and should not be done lightly.
These are just the three techniques that I think are the most important.  There are clearly other convergent thinking strategies that are useful to analysts - don't hesitate to leave your favorite in the comments!
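To make the three techniques a bit more concrete, here is a toy Python sketch of how grouping, prioritizing and filtering might be chained together on the raw output of a brainstorming session.  Everything in it - the hypotheses, the group labels, the scores and the cutoff - is invented for illustration; it is a thinking aid, not an analytic tool.

```python
from collections import defaultdict

# Toy output of a divergent thinking session: (hypothesis, group label, priority score).
# All hypotheses, labels and scores are invented for illustration.
ideas = [
    ("Rebels seize the capital", "military", 0.7),
    ("Ceasefire holds through elections", "political", 0.4),
    ("Currency collapse triggers unrest", "economic", 0.6),
    ("Opposition boycotts the vote", "political", 0.5),
    ("Foreign lenders restructure debt", "economic", 0.2),
]

# Grouping: collect similar ideas under a (carefully chosen!) label.
groups = defaultdict(list)
for hypothesis, label, score in ideas:
    groups[label].append((hypothesis, score))

# Prioritizing: rank the ideas within each group so that collection and
# analytic resources go to the most pressing ones first.
for members in groups.values():
    members.sort(key=lambda pair: pair[1], reverse=True)

# Filtering: drop anything below a cutoff rather than collecting on everything.
CUTOFF = 0.3
for label, members in groups.items():
    kept = [(h, s) for h, s in members if s >= CUTOFF]
    print(label, "->", kept)
```

Note that the hard part in real life is exactly what the code takes as given: choosing the labels, the scores and the cutoff.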

Tuesday, August 19, 2014

What Is A Critical Thinker?

Image Courtesy Wade M via Flickr
Just got turned on to Joe Lau's book, An Introduction to Critical Thinking and Creativity (H/T to Edutopia).  I haven't had time to read more than the Introduction (free to download) and review the Table of Contents but it was enough to get me to order the book.

Why?

I really like his definition of critical thinking.  Lau identifies the 10 abilities of a critical thinker and it seems like a pretty comprehensive list to me:

  1. Understand the logical connections between ideas.
  2. Formulate ideas succinctly and precisely.
  3. Identify, construct, and evaluate arguments.
  4. Evaluate the pros and cons of a decision.
  5. Evaluate the evidence for and against a hypothesis.
  6. Detect inconsistencies and common mistakes in reasoning.
  7. Analyze problems systematically.
  8. Identify the relevance and importance of ideas.
  9. Justify one's beliefs and values.
  10. Reflect on the justification of one's own beliefs and values.
Obviously this is just my first take on it but I think it is worth checking out.


Monday, August 4, 2014

Will This New Game Genre Change Intelligence Training, Education?

Most gamers understand that games fall into genres.  For example, Scrabble, Boggle and my own game, Widget, are all examples of "word games".  There is no standardized list of game genres, of course, but gamers are like Supreme Court Justice Potter Stewart when it comes to genres (Stewart, in trying to define pornography, famously wrote in Jacobellis v. Ohio, "I know it when I see it.").

Most games, then, fit neatly into existing genres and truly new genres come along only rarely.  It is even more rare for a new genre of games to have a large-scale cultural or social impact.  The last such genre that I can think of was the role-playing game, epitomized by the first and still one of the most popular games, Dungeons and Dragons.  Whether you played D and D or not (or liked it if you did play it), there is no denying that it spawned a genre of games that impacted and continue to impact both culture and society.

Today there is a new genre of games - cooperative tabletop games - that I think has a chance to have a similar impact on the way we teach not just intelligence but just about everything.


Cooperative games are labelled as such because players cooperate with each other to defeat the game.  This kind of play style has long been a staple of many video games where players will gather as teams to defeat a common enemy.

While there are a few examples that date back as far as the 1980s, modern cooperative tabletop games typically require much more nuanced gameplay than their video game counterparts.  True cooperation on everything from strategy to resources is usually necessary to defeat these challenging games.

If you are not familiar with this genre (and most people are not), I strongly recommend you get some of these games and play them.  Two good examples to start with are Pandemic and Forbidden Desert.  Both games pit you and the rest of the players against the game itself in a race to beat it.  Either everyone wins or no one wins.

There are many variations on the theme but typically these games throw an escalating series of challenges at the players.  Pandemic, for example, envisions a team of experts working to stop a global disease epidemic.  Forbidden Desert asks players to collect a series of artifacts and escape the desert before sandstorms swallow the players.

Players in these games usually assume a variety of roles, such as Engineer or Medic, each with a particular skill useful in defeating whatever it is the game throws at them.  Players can and do discuss everything from strategy to resource allocation.  This kind of game doesn't just encourage cooperation but demands it from every player.

My recent game, Spymaster (which has proved incredibly popular - I have given out nearly 200 copies to date), was designed as such a game.  Small groups of players have to make collaborative decisions about how and where to place certain collection assets in order to collect various information requirements, all while losing the fewest possible assets.  While the current version of Spymaster allows the players to determine how they will make decisions about asset allocation, I am thinking about an "advanced" version of the game that will assign various roles to the players coupled, of course, with unique capabilities associated with each role.

Whether you have had a chance to play Spymaster or not, once you have played a couple of these kinds of games, the possibilities for their use in class become very apparent.  There is a lot of learning going on in these games and not all of it is knowledge-based.  Teamwork, conflict management and collaboration are all essential elements of these games.

More importantly for classroom use, these games can be designed to take a relatively small amount of time to play.  Unlike video games, tabletop games also tend to expose the underlying system to the players in a bit more detail.  Likewise, tabletop games are vastly less expensive to design and produce than video games, which means that more topics could be covered for the same or less money - clearly a consideration in these budget-restricted times.  Finally, bringing a tabletop game into a secure facility is vastly easier than trying to import electrons.

Do I really think that cooperative tabletop games will change intel training and education?  I'm not sure, but I know that they can - and that this is an experiment in games-based learning worth attempting.

Tuesday, July 22, 2014

Realism, Playability And Games In The Intelligence Classroom

A couple of weeks ago, I made a print-and-play version of my new game about collection management, Spymaster, available to anyone who reads this blog and would drop me an email (The offer is still open, by the way, in case you missed it the first time).

Since then, I have mailed out over 100 copies to everyone from the DNI's office to troops deployed in Afghanistan to academics in Japan to the Norwegian police forces!

Feedback is starting to trickle in and the comments have been largely positive (whew!) even from some very experienced collection managers (Thanks!).  In addition, I have received a number of outstanding suggestions for enhancing or improving the game.  Some of these include:

  • Making different collection assets work better or worse against different information requirements.
  • Increasing the point value of information requirements collected early.
  • Making some of the OSINT cards "Burn - 0" or impossible to burn.
  • Giving players a budget and assigning dollar values to each collection asset such that players have to stay within their budget as well.

I recognize that these suggestions may not make much sense if you haven't played the game but all of them (plus many more) are fantastic ideas designed to make the game more real.  And therein lies the rub...

One of the classic problems of games designed to simulate some aspect of the real world is the trade-off between realism and playability.  Playability is really just how easy it is to play the game.  Every time you add a new rule to make the game more realistic, you make the game more difficult to play and therefore less playable.  It's not quite as simple as that but it gives you a good idea of how the problem manifests itself.  Great games designed to simulate reality often give a strong sense of realism while remaining relatively simple but the truth of it is, like the Heisenberg Uncertainty Principle, the more you try to do one, the less, typically, you are able to do the other.

The problem of playability versus realism is analogous to the problem of feature creep in project management.  Most people have been involved in a project that started out simple but, over time, grew incredibly complex as more and more "good ideas" were added.  Each idea, in and of itself, was justifiable but, in the end, led to an unwieldy mess.

Figuring out where to draw the line is just as important in game design as it is in project management.  This constraint is even more strict when considering the modern intelligence classroom.  Here, unless the course is entitled "collection management", there is likely a highly limited amount of time to devote to a game on collection management.  

Consider the case of Spymaster.  I wanted a game which would replace a one-hour lecture on collection management for our intro classes.  To make this work, I would need to be able to set up the game, explain the rules, play the game and then conduct an outbrief all within an hour.  That's pretty tough to do (at least for me) and still make the game meet your learning objectives.  It becomes a very careful balance of putting good ideas into the game while not running out of time to play the game in class.

The classic solution to this problem is to have a basic version and an advanced version (or several advanced versions).  These can be included in the rules from the outset or added later as expansion packs.  Right now, this is exactly what I am doing with all of the feedback I am receiving - scouring it for good ideas I want to put into more advanced versions of Spymaster!

Wednesday, July 2, 2014

Spymaster - Test My New Card Game About Collection Management!

Last year I was struggling with how to make the classroom discussion of collection management (you know... the allocation of collection assets such as spies and satellites in order to gather required information in a timely manner) more interesting.

Couldn't do it.  

Even people who find the job enormously gratifying (and there are many), seem to have a hard time explaining why they like it so much.  

So...I decided to make a game out of it.

I call the game Spymaster and I have been using it in classes and playing it in my weekly Game Lab for most of the last year.  It seems to work really well both as a game and as a tool for making the challenges of collection management more real to students and young intel professionals.

It plays fast - in about 15 minutes - and is a cooperative game (for those of you unfamiliar with this term, a cooperative game is one where all the players are on the same side trying to beat the game; if you have ever played the board games Pandemic or Forbidden Island, you have played a cooperative game).  You can even play it solitaire but I have found it works best with 4-5 players and works really well in a classroom.

I have spent the last week or so cleaning up the game, making it look pretty, and writing down the rules and a brief tutorial.  Now I am looking for people who would like to take this "beta" version out for a spin.

If you are interested in receiving a print-and-play version of the game on the condition that you give me some feedback, drop me a line at kwheaton@mercyhurst.edu. If you just want to follow along as I develop the game, check out the Spymaster Facebook Page.

Friday, June 27, 2014

Reviewers Needed For The Visual Analytics Science And Technology (VAST) 2014 Challenge!

The Visual Analytics Community and their member organizations (including the Department of Homeland Security, Pacific Northwest National Laboratory and the Defense Threat Reduction Agency), in coordination with the IEEE, sponsor a visual analytics challenge each year at the IEEE conference for students and researchers.
In order to judge the output from the participants, the challenge organizers ask analysts to participate as reviewers of the submissions.  Kris Cook, who is on the contest committee, has asked me to put the word out that the contest needs reviewers for this year’s challenge.
This is an unpaid, all-volunteer effort to assist a non-profit sponsored contest.  Kris’ note to me is reproduced below with additional links.  If you are interested in participating or have any additional questions, please contact her directly.
For what it is worth, taking a look at the VAST entries is a very interesting and rewarding way to learn what is happening in the world of visual analytics.
Begin text of note:
We invite you to be a reviewer for this year’s IEEE Visual Analytics Science and Technology (VAST) Challenge.  The VAST Challenge poses interesting problems that contestants solve using visualization-based software tools that support analysis of complex data sets.
We are soliciting reviewers for three mini-challenges and a grand challenge this year. 
  • Mini-Challenge 1 challenges participants to identify the current organization of a fictitious terrorist organization and how that organization has changed over time, as well as to characterize the events surrounding the disappearance of multiple people.  Participants will use visual analytics to analyze the specified data.
  • Mini-Challenge 2 challenges participants to describe the daily routines of employees of a fictitious company and to identify suspicious behaviors. This task focuses on the analysis of movement and tracking data and is thus primarily a spatiotemporal challenge.
  • Mini-Challenge 3 challenges participants to identify a timeline of significant events in a fictitious city and identify important participants, locations, and durations by monitoring real-time data feeds. This task poses a streaming analysis challenge.
  • The Grand Challenge asks participants to synthesize the discoveries made across the three mini-challenges to form a high level description of the entire scenario. This task focuses on the identification of who disappeared, who was responsible, and the underlying motivations. Significant information gaps will also be addressed by the participants.
More specific information about the tasks may be found at http://vacommunity.org/VASTChallenge2014.

As a reviewer you will be responsible for reading 3-4 submissions and providing written feedback for the committee and the submitters. Each submission consists of an entry form describing the submitter’s software, their technical approach, and their answers to the mini-challenge questions, as well as a short video showing an example of the analytic processes used by the submitters. 
This year, the reviewing period is as follows:  Entries will be available for review by July 12.  Your reviews will be due by July 28.
All review materials will be accessible over the internet. Reviews will be conducted using the Precision Conference web-based reviewing system. Reviewers will be registered in the Precision Conference system and will submit their reviews using Precision Conference web pages.
If you are interested in reviewing please respond to vast_challenge@ieeevis.org no later than July 1.  Please indicate which mini-challenges you would be most interested in reviewing and how many entries you are willing to review. 
Thank you for your time and consideration!
VAST Challenge Committee

Kris Cook, Georges Grinstein, and Mark Whiting, co-chairs

Thursday, June 26, 2014

Visa-free Access: Who Has The Best Passport?

Having to get a visa is a hassle - just ask anyone who has gone through the process.  Likewise, being able to travel into and out of countries without a visa is a real benefit.  Who then, has the "best" passport?  Which country offers its citizens the most possibilities for visa-free travel?  The answers, in the infographic below (from Movehub), are interesting (H/T to Jeremy!):

World Passport Power

Wednesday, June 18, 2014

New Version Of "How To Get A Job In Intelligence" In the Works And I Could Use Some Help...

A few years ago, I published a series of posts called "How To Get A Job In Intelligence".  It was, without a doubt, one of the most popular series I have ever written.

Well, I am about to put out an updated version of that series as an e-book on Kindle.

I have been working with my research assistant, McKenzie Rowland, for the better part of three months to update the info and improve the advice (more articles, more links, more hints and tips, more inside info) and I am getting close to launch.

But I could use your help.

I am looking for feedback from people who found some use in the original series.  If you read the original series and found it helpful or informative, I would appreciate it if you would drop me a line:  kwheaton at mercyhurst dot edu.  I would like to talk to you about what worked for you and what did not.  

Thanks!

Monday, June 16, 2014

The Game Every Intel Professional Should Play Is Now Online And Free!

About a year ago, I wrote about one of my favorite games, the old Viking game of Hnefatafl.  I said then I thought it was a game every intel professional should play.

Since then I created and launched an updated tabletop version of the game through Kickstarter called Cthulhu vs the Vikings and, a few months ago, announced a very rough online version of the game.

Today, the high-quality version is available for free on my website, SourcesandMethodsGames.com!

The game is asymmetric and player vs. player.  For those of you unfamiliar with gamer-speak, this means that, like games such as Words With Friends, you are playing a real person and not the computer.  It also means that you do not have to sit at the terminal waiting - you can play a few moves, walk away and come back and finish the game later.  You can even have up to 5 games going at the same time!

Another unique feature of this game is that you can either play against people you have invited or you can choose the "Quickplay" option, which will match you against the next player to come to the game and choose quickplay as well.
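For the programmers in the audience, the quickplay idea is simple enough to sketch in a few lines of Python.  This is purely illustrative - it is not the actual code behind the site - but it captures the logic: the first player to choose quickplay waits, and the next one is paired with them.

```python
from collections import deque

# Purely illustrative sketch of a "Quickplay" matcher - not the site's
# actual code.  Players who choose quickplay wait in a queue; each new
# arrival is paired with the earliest waiting player.
waiting = deque()

def quickplay(player):
    if waiting:
        opponent = waiting.popleft()
        return (opponent, player)  # a new game starts
    waiting.append(player)
    return None  # no one is waiting yet, so this player waits

print(quickplay("Astrid"))  # None - Astrid waits for an opponent
print(quickplay("Bjorn"))   # ('Astrid', 'Bjorn') - a game begins
```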

It's a great opportunity to play a great game!  Hope you enjoy it!

Tuesday, June 10, 2014

Thinking in Parallel (Part Three - Testing The Mercyhurst Model Against The Real World)

Part 1 -- Introduction
Part 2 -- The Mercyhurst Model

For the last 11 years, I have been using the model described in Part 2 to structure my Strategic Intelligence class at Mercyhurst University.  This is a capstone class for seniors and 2nd year graduate students within the Intelligence Studies program at Mercyhurst and is centered on a real-world project for a real-world decisionmaker, often within the US National Security Community.  To date, I have overseen 133 of these types of projects.

The broad parameters of the projects have remained unchanged since 2003.  Students in the class are divided into teams and are assigned by the instructor to one of 4-5 projects available during that term.  Each project is sponsored by a national security, business, or law enforcement organization that has a strategic intelligence question.  To date, sponsors of these questions have included organizations such as the National Geospatial-Intelligence Agency, the Defense Intelligence Agency, the National Intelligence Council, the National Security Agency, the 66th Military Intelligence Group, and the Naval Criminal Investigative Service, to name just a few.  To give readers a sense of the wide variety of questions intelligence studies students are expected to answer in this course, I have listed a few recent examples below:
  1. What role will non-state actors (NSAs) play and what impact will NSAs have in Sub-Saharan Africa over the next five years?
    • What is the likely importance of NSAs vs. State Actors, Supra-State Actors and other relevant categories of actors in sub-Saharan Africa?
    • What are the roles of these actors in key countries, such as Niger?
    • Are there geographic, cultural, economic or other patterns of activity along which the roles of these actors are either very different or strikingly similar?
    • What analytical processes and methodologies were applied to the questions above and which proved to be effective or ineffective?
  2. What are the most important and most likely impacts on, and threats to, US national interests (including but not limited to political, military, economic and social interests) resulting from infectious and chronic human disease originating outside the US over the next 10-15 years?
  3. What are the likely trends in Brazil’s oil/liquid fuel market and electric power sector in the next ten years?  Where will these trends likely manifest themselves?
    • What energy capacity and security issues are likely to be the most significant to Brazil’s economy in the next ten years?
    • How will Brazil likely address current and/or future energy security issues over the next ten years?
    • Where will Brazil address these energy shortfalls?
In each case, students had only 10 weeks to conduct the research, write the analysis and present the final product to the decisionmaker.  The students had no additional financial resources available to them and, other than the question itself, received no support directly from the decisionmaker.  Students rarely had any subject matter expertise in the area under question and were only allowed to use open sources.  Students were expected to integrate lessons learned from all previous intelligence studies classes and to manage all aspects of the project without significant supervision.  Finally, all the students, in addition to this project, were taking a full academic load at the same time.  

After all of the deliverables had been produced and disseminated, the decisionmakers sponsoring the projects were asked to provide objective feedback directly to the course instructor.  This feedback, in turn, was evaluated on a five-point scale correlated with traditional grading practices and professional expectations.  In short, a "3" on this scale is roughly equivalent to a "B" and a "4" is roughly equal to "A" work in a university setting.  A "5", on the other hand, is the kind of work that would be expected from a working (albeit junior) intelligence professional.  The chart below indicates how the annual averages have changed over time.


(Note:  While this chart may appear to reflect grade inflation more than any other suggested effect, it should be noted that “A” is essentially “average” among Mercyhurst University Intelligence Studies seniors and 2nd year graduate students and has been for the entire time frame shown above.  The current dropout rate from the program is approximately 50% and much of that is due to a strict 3.0 minimum GPA required to stay in the program.  As a result, seniors and second year graduate students (the only students allowed to take the class) typically have GPAs that average 3.6 or above.  For example, two years ago, 18 of the top 20 GPAs in the entire University belonged to Intelligence Studies students.)
Anecdotally, it is possible to state the exact impact of these reports within national security agencies in only a few cases.  For example, the report that answered the question on global health mentioned earlier earned this praise from the National Intelligence Council: 
“Although the Mercyhurst "NIE" should not be construed as an official U.S. government publication, we consider this product an invaluable contribution to the NIC's global disease project: not only in terms of content, but also for the insights it provides into methodological approaches. The Mercyhurst experience was also an important lesson in how wikis can be successfully deployed to facilitate such a multifaceted and participatory research project.”
Likewise, in David Moore’s book Sensemaking:  A Structure for an Intelligence Revolution (published by the National Defense Intelligence College in 2011), the study on non-state actors in sub-Saharan Africa produced in answer to the question mentioned above was judged more rigorous than a similar study conducted by the National Intelligence Council (in cooperation with the Eurasia Group).



Beyond the national security community, however, the impact of these reports on various businesses and other organizations is often easier to determine.  For example, senior managers at Composiflex, a mid-sized composites manufacturer, indicated, “We used this project as a seed for our new marketing plan in 2007 and now an industry that we had not even tapped before is 30% of our business.” 

Likewise, Joel Deuterman, the CEO of Velocity.net, an Internet Service Provider, stated, “The analysts discovered that our approach was actually a cutting-edge, developing standard in our industry…What really substantiated the data for us was to see many of our existing customers on the list. Then we knew we could rely on the validity of the ones they had found for us.”  

Even foreign organizations have seen the benefit of these products.  Ben Rawlence, the Advisor for Foreign Affairs and Defense in the Whip’s Office of the Liberal Democrat Party in the UK, stated, “The research carried out by your students was first class, and has been of substantial use to Members of Parliament…  It was comprehensive, well sourced and intelligently put together.  I have had no hesitation recommending it to our MPs and Lords in the same way that I recommend briefings provided for us by professional research organisations…”

While it is possible to imagine more rigorous testing of this model of the intelligence process, the long term success of the process in generating actionable intelligence for a wide variety of customers on a range of difficult problems in a very short time using limited resources is hard to ignore.  More importantly, not only has the process proven itself successful but this success has trended upwards as improvements have been made over the years in terms of structuring the course and teaching material consistent with this approach to the intelligence process.

*****

Intelligence in the 21st century is best thought of as a series of sub-processes operating interactively and in parallel.  

This conclusion, by itself, has significant implications for the training and education of intelligence professionals.  In the first place, it suggests that it is no longer possible to specialize in one area to the exclusion of another.  Intelligence professionals will have to be trained to think more broadly, to be able to jump more fluidly from modeling to collection to analysis to production and back as the process of creating intelligence moves forward over time.  

Likewise, hardware and software support systems will need to be designed that facilitate this leaping back and forth between the various sub-processes.  Designing products that work sequentially in a parallel world will not only frustrate but will also slow down the process of generating intelligence – a result that is absolutely counter to the intelligence needs of modern decisionmakers.  

Finally, as dramatic as this type of change might appear to be, it is, perhaps, better thought of as merely aligning the training and education of intelligence professionals with what it is they already do.

Monday, June 9, 2014

Thinking In Parallel (Part 2 - The Mercyhurst Model)

Part 1 -- Introduction

While a number of tweaks and modifications to the cycle have been proposed over the years, very few professionals or academics have recommended wholesale abandonment of this vision of the intelligence process.

This is odd.  

Other fields routinely modify and improve their processes in order to remain more competitive or productive.  The US Army, for example, has gone through several major revisions to its combat doctrine over the last 30 years, from the Active Defense Doctrine of the 1970s to the AirLand Battle Doctrine of the 80s and 90s to Network Centric Operations in the early part of the 21st Century.  The model of the intelligence process, the Intelligence Cycle, however, has largely remained the same throughout this period despite the criticisms leveled against it.  The questions “What is the intelligence process?” and “What should the intelligence process be?” thus remain open theoretical questions, ripe for examination.

There are common themes, however, that emerge from this discussion of process.  These themes dictate, in my mind, that a complete understanding of the intelligence process must always include both an understanding of intelligence's role in relation to operations and the decisionmaker and an understanding of how intelligence products are created.  Likewise, I believe that the process of creating intelligence is best visualized as a parallel rather than as a sequential process.  I call this the "Mercyhurst Model" and believe it is a better way to do intelligence.  More importantly, I think I have the evidence to back that statement up.

The first of the common themes referenced above is that the center of the process should be an interactive relationship between operations, the decisionmaker and the intelligence unit.  It is very clear that the intelligence process cannot be viewed in a vacuum.  If it is correct to talk about an “intelligence process” on one side of the coin, it is equally important for intelligence professionals to realize that there is an operational process, just as large if not larger and equally important if not more so, on the other side, and a decisionmaking process that includes both.

The operational and intelligence processes overlap in significant ways, particularly with respect to the purpose and the goals of the individual or organization they support.  The intelligence professional is, however, focused externally and attempts to answer questions such as “What is the enemy up to?” and “What are the threats and opportunities in my environment?”  The decisionmaking side of the coin is more focused on questions such as “How will we organize ourselves to take advantage of the opportunity or to mitigate the threat?” and “How do we optimize the use of our own resources to accomplish our objectives?”  In many ways, the fundamental intelligence question is “What are they likely to do?” and the decisionmaker’s question is “What are we going to do?”  The image below suggests this relationship graphically.



The second theme is that it should be from this shared vision of the organization’s purpose and goals that intelligence requirements “emerge”.  With few exceptions, there does not seem to be much concern among the various authors who have written about the intelligence process about where requirements come from.  While most acknowledge that they generally come from the decisionmakers or operators who have questions or need estimates to help them make decisions, it also seems to be appropriate for intelligence professionals to raise issues or provide information that was not specifically requested when relevant to the goals and purpose of the organization.  In short, there seems to be room for both “I need this” coming from a decisionmaker and for “I thought you would want to know this” coming from the intelligence professional as long as it is relevant to the organization’s goals and purposes.

Theoretically, at least, the shared vision of the goals and purpose of the organization should drive decisionmaker feedback as well.  The theoretical possibility of feedback, however, regularly runs up against the common perception of reality, at least within the US national security community, that feedback is ad hoc at best.  There, the intelligence professionals preparing the intelligence are oftentimes so distant from the decisionmakers they are supporting that feedback is a rare occurrence and, if it comes at all, typically arrives only when there has been a flaw in the analysis or products.  As former Deputy Director of National Intelligence for Analysis Thomas Fingar (among others) has noted, “There are only two possibilities: policy success and intelligence failure,” suggesting that “bad” intelligence is often a convenient whipping boy for poor decisions while “good” intelligence rarely gets credit for eventual decisionmaker successes.

It is questionable whether this perception of reality applies throughout the intelligence discipline or even within the broader national security community.  Particularly on a tactical level, where the intelligence professional often shares the same foxhole, as it were, with the decisionmaker, it becomes obvious relatively quickly how accurate and how useful the intelligence provided is to the operators.  While most intelligence professionals subscribe to the poor feedback theory, most also have a story or two about how they were able to give analysis to decisionmakers and how that analysis made a real difference, a difference willingly acknowledged by the decisionmaker.  The key to this kind of feedback seems less related to the issue or to intelligence writ large and more related to how closely tied the intelligence and decisionmaking functions are.  The more distance between the two, the less feedback, unsurprisingly, there is likely to be.

The third theme is that from the requirement also emerges a mental model in the mind of the intelligence professional regarding the kinds of information that he or she needs in order to address the requirement.  This model, whether implicit or explicit, emerges as the intelligence professional thinks about how best to answer the question and is constructed based on previous knowledge and the professional’s understanding of the question.

This mental model typically contains at least two kinds of information: information already known and information that needs to be gathered.  Analysts rarely start with a completely blank slate.  In fact, Philip Tetlock has demonstrated that a relatively high level of general knowledge about the world significantly improves forecasting accuracy across any domain of knowledge, even highly specialized ones (counter-intuitively, he also offers good evidence to suggest that high degrees of specialized knowledge, even within the domain under investigation, do not add significantly to forecasting accuracy).

The mental model is more than just an outline, however.  It is where biases and mental shortcuts are most likely to impact the analysis.  It is where divergent thinking strategies are most likely to benefit and where their opposites, convergent thinking strategies such as grouping, prioritizing and filtering, need to be most carefully applied.  One of the true benefits of this model over the traditional Intelligence Cycle is that it explicitly includes humans in the loop - both what they do well and what they don't.

Almost as soon as the requirement gains enough form to be answerable, and even as it continues to be modified through exchanges between the decisionmakers and the intelligence professionals, four processes, operating in parallel, start to take hold: the modeling process we just discussed, collection (in a broad sense) of additional relevant information, analysis of that information with the requirement in mind and early ideas about production (i.e. how the final product will look, feel and be disseminated in order to facilitate communicating the results to the decisionmaker).

The notional graphic below visualizes the relationship between these four factors over the life of an intelligence product.  Such a product might have a short suspense (or due date) as in the case of a crisis or a lengthier timeline, as in the case of most strategic reports, but the fundamental relationship between the four functions will remain the same.  All four begin almost immediately but, through the course of the project, the amount of time spent focused on each function will change, with each function dominating the overall process at some point.  The key, however, is that these four major functions operate in parallel rather than in sequence, with each factor informing and influencing the other three at any given point in the process.



A good example of how these four functions interrelate is your own internal dialogue when someone asks you a question.  Understanding the question is clearly the first part, followed almost immediately by a usually unconscious realization of what it would take to answer the question along with a basic understanding of the form that answer needs to take.  You might recall information from memory but you also realize that there are certain facts you might need to check out before you answer the question.  If the question is more than a simple fact-based question, you would probably have to do at least some type of analysis before framing the answer in a form that would most effectively communicate your thoughts to the person asking the question.  You would likely speak differently to a child than you would to an adult, for example, and, if the question pertained to a sport, you would likely answer the question differently when speaking with a rabid fan than with a foreigner who knew nothing about that particular sport.
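For those who think in code, here is a deliberately toy sketch of the same idea in Python.  It is not a model of any real system - the state keys, loop counts and sleep times are all invented - but it shows the structural difference this post is arguing for: four sub-processes running concurrently against a shared, evolving picture of the problem rather than handing work off in a fixed sequence.

```python
import asyncio

# Notional sketch only: four intelligence sub-processes share one evolving
# picture of the problem instead of running as a sequential cycle.
state = {"model": [], "raw": [], "findings": [], "draft": []}

async def modeling():
    for item in ["known facts", "gaps to collect"]:
        state["model"].append(item)
        await asyncio.sleep(0.1)   # yield so the other functions can run

async def collection():
    while len(state["raw"]) < 3:
        # Collection is steered by the current model, not a finished one.
        state["raw"].append(f"report {len(state['raw']) + 1}")
        await asyncio.sleep(0.1)

async def analysis():
    while len(state["findings"]) < 2:
        if state["raw"]:               # analyze whatever has arrived so far
            state["findings"].append(f"insight from {state['raw'][-1]}")
        await asyncio.sleep(0.15)

async def production():
    state["draft"].append("format: short estimate")  # form is decided early
    await asyncio.sleep(0.3)
    state["draft"].append(f"body built on {len(state['findings'])} findings")

async def main():
    # All four functions start almost immediately and run in parallel.
    await asyncio.gather(modeling(), collection(), analysis(), production())
    print(state)

asyncio.run(main())
```

A sequential version of this toy would simply await each function in turn; the point of the parallel version is that each function can react at any moment to what the others have produced so far.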

This model of the process concludes then where it started, back with the relationship between the decisionmaker, the intelligence professional and the goals and purposes of the organization.  The question here is not requirements, however, but feedback.  The intelligence products the intelligence unit produced were, ultimately, either useful or not.  The feedback that results from the execution of the intelligence process will impact, in many ways, the types of requirements put to the intelligence unit in the future, the methods and processes the unit will use to address those requirements and the way in which the decisionmaker will view future products.

This model envisions the intelligence process as one where everything, to one degree or another, is happening at once.  It starts with the primacy of the relationship between the intelligence professional and the decisionmakers those professionals support.  It broadens and redefines those few generally agreed-upon functions of the intelligence cycle and sees them as operating in parallel, with each taking precedence in more or less predictable ways throughout the process.  This model also explicitly adds the creation and refinement of the mental model of the requirement, created by the intelligence unit, as an essential part of the process.  This combined approach captures the best of the old and new ways of thinking about the process of intelligence.  Does it, however, test well against the reality of intelligence as it is performed on real-world intelligence problems?

Part Three - Testing The Mercyhurst Model Against The Real World

Friday, June 6, 2014

Thinking In Parallel: A 21st Century Vision Of The Intelligence Process

(Note:  I was recently asked to present a paper on my thoughts about re-defining the intelligence process - and the implications of that redefinition for education, training and integration across the community - at the US Intelligence Community Geospatial Training Council's (CGTC) conference in Washington DC.  For those familiar with my earlier work on the intelligence cycle and the damage it is causing, you will find this paper shorter, less about the Cycle and more about the alternative to it I am proposing (and the evidence to support the adoption of that alternative...).  Enjoy!)


Abstract:  Effective integration and information sharing within the intelligence community is not possible until the fundamental process of intelligence is re-imagined for the 21st Century.  The current model, the Intelligence Cycle, developed during World War II and widely criticized since, has outlived its useful life.  In fact, it has become part of the problem.  This paper abandons this sequential process, which was appropriate for a slower and less information-rich environment.  Instead, a more streamlined, parallel process is proposed.  Accompanying this new vision of the intelligence process is an analysis of data collected from over 130 real-world intelligence projects conducted using this model of the intelligence process and delivered to decisionmakers in the national security (including GEOINT), law enforcement and business sectors.  Additionally, the training and education implications, as well as the kinds of software and hardware systems necessary to support this new understanding of the process, are discussed.

Part 1 -- Introduction

"We must begin by redefining the traditional linear intelligence cycle, which is more a manifestation of the bureaucratic structure of the intelligence community than a description of the intelligence exploitation process." -- Eliot Jardines, former head of the Open Source Center, in prepared testimony in front of Congress, 2005  
"When it came time to start writing about intelligence, a practice I began in my later years at the CIA, I realized that there were serious problems with the intelligence cycle.  It is really not a very good description of the ways in which the intelligence process works."  Arthur Hulnick, "What's Wrong With The Intelligence Cycle", Strategic Intelligence, Vol. 1 (Loch Johnson, ed), 2007
"Although meant to be little more than a quick schematic presentation, the CIA diagram [of the intelligence cycle] misrepresents some aspects and misses many others." -- Mark Lowenthal, Intelligence:  From Secrets to Policy (2nd Ed.,2003) 
"Over the years, the intelligence cycle has become somewhat of a theological concept:  No one questions its validity.  Yet, when pressed, many intelligence officers admit that the intelligence process, 'really doesn't work that way.'" -- Robert Clark, Intelligence Analysis:  A Target-centric Approach, 2010

Academics have noted it and professionals have confirmed it:  Our current best depiction of the intelligence process, the so-called "intelligence cycle", is fatally flawed.  Moreover, I believe these flaws have become so severe, so grievous, that continued adherence to and promotion of the cycle is actually counterproductive.  In this paper I intend to briefly outline the main flaws in the intelligence cycle, to discuss how the continued use of the cycle hampers, indeed extinguishes, efforts to effectively integrate and share information and, finally, to suggest an alternative -- a parallel process -- that, if adopted, would transform intelligence training and education.

*****

Despite its popularity, the history of the cycle is unclear.  US Army regulations published during WWI identified collection, collation and dissemination of military intelligence as essential duties of what was then called the Military Intelligence Division, but there was no suggestion that these three functions happened in a sequence, much less in a cycle.

By 1926, military intelligence officers were recommending four distinct functions for tactical combat intelligence:  requirements, collection, "utilization" (i.e., analysis), and dissemination, though, again, there was no explicit mention of an intelligence cycle.

The first direct mention of the intelligence cycle (see image) is from the 1948 book, Intelligence Is For Commanders.  Since that time, the cycle, as a model of how intelligence works, has become pervasive.  A simple Google image search on the term "Intelligence Cycle" quickly gives one a sense of the wide variety of agencies, organizations and businesses that use some variant of the cycle.

The Google image search above highlights the first major criticism of the Intelligence Cycle:  Which one is correct?  In fact, an analysis of a variety of intelligence cycles from both within and outside the intelligence community reveals significant differences, often within a single organization (see the chart below, gathered from various official websites in 2011, and the toy demonstration that follows it).
While there is some consistency ("collection", for example, is mentioned in every variant of the cycle), these disparities have significant training and education implications that will likely manifest themselves as different agencies attempt to impose their own understanding of the process during joint operations.  Different agencies teaching fundamentally different versions of the process will likewise seriously impact the systems designed to support analysts and operators within agencies.  This, in turn, will likely make cross-agency integration and information sharing more difficult or even impossible.
[Chart:  intelligence cycle variants, compiled from various official agency websites, 2011]
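(To make the inconsistency concrete, represent each published variant as a simple list of steps and ask what the variants actually share.  The Python sketch below is a toy:  the three step lists are illustrative paraphrases of commonly published variants, not exact quotations from any agency's doctrine.)

# A toy demonstration of the inconsistency problem.  The step lists are
# illustrative paraphrases, not exact quotations from any agency.
variants = {
    "Agency A": ["planning and direction", "collection", "processing",
                 "analysis and production", "dissemination"],
    "Agency B": ["requirements", "planning and direction", "collection",
                 "processing and exploitation", "analysis and production",
                 "dissemination"],
    "Agency C": ["direction", "collection", "evaluation", "collation",
                 "analysis", "dissemination", "feedback"],
}

# Which steps appear in every variant?  (Typically only a step or two.)
common = set.intersection(*(set(steps) for steps in variants.values()))
print("Steps shared by all variants:", sorted(common))

# Which steps are unique to a single variant?
for name, steps in variants.items():
    others = set().union(*(set(s) for n, s in variants.items() if n != name))
    print(f"Unique to {name}:", sorted(set(steps) - others))

Even with generous paraphrasing, only "collection" and "dissemination" survive the intersection in this example -- and the unique steps are exactly the ones that joint training and joint operations would have to reconcile.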
The chart above also highlights the second major problem with the cycle:  Where is the decisionmaker?  None of the versions of the intelligence cycle listed above explicitly includes or explains the role of the decisionmaker in the process.  Few, in fact, include a specific feedback or evaluation step.  From the standpoint of a junior professional in a training environment (particularly in a large organization such as the US National Security Intelligence Community, where intelligence professionals are often both bureaucratically and geographically distant from the decisionmakers they support), this can create the impression that intelligence is a "self-licking ice-cream cone" -- existing primarily for its own pleasure rather than as an important component of a decision support system.

Finally, and most damningly (and as virtually all intelligence professionals know):  "It just doesn't work that way."  The US military's Joint Publication 2-0, Joint Intelligence (page 1-5), describes modern intelligence as the antithesis of the sequential process imagined by the Cycle.  Instead, intelligence is clearly described as fast-paced and interactive, with many activities taking place simultaneously (albeit with different levels of emphasis):

"In many situations, various intelligence operations occur almost simultaneously or may be bypassed altogether. For example, a request for imagery requires planning and direction activities but may not involve new collection, processing, or exploitation. In this case, the imagery request could go directly to a production facility where previously collected and exploited imagery is reviewed to determine if it will satisfy the request. Likewise, during processing and exploitation, relevant information may be disseminated directly to the user without first undergoing detailed all-source analysis and intelligence production. Significant unanalyzed operational information and critical intelligence should be simultaneously available to both the commander (for time-sensitive decision-making) and to the all source intelligence analyst (for the production and dissemination of intelligence assessments and estimates). Additionally, the activities within each type of intelligence operation are conducted continuously and in conjunction with activities in each intelligence operation category. For example, intelligence planning (IP) occurs continuously while intelligence collection and production plans are updated as a result of previous requirements being satisfied and new requirements being identified. New requirements are typically identified through analysis and production and prioritized dynamically during the conduct of operations or through joint operation planning.”

The training and education implications of this kind of disconnect between the real world of intelligence and the process as taught in the classroom, between practice and theory, are both severe and negative.

At one end of the spectrum, it is as simple as a violation of the long-standing military principle of "train as you will fight".  Indeed, the only question is which approach is more counterproductive:  forcing students of intelligence to learn the Cycle, only to have them realize after graduation, and on their own, that it is unrealistic; or throwing a slide of the Cycle up on the projector only to have an experienced instructor announce, "This is what you have to learn, but this isn't the way it really works."  Both scenarios regularly take place within the training circles of the intelligence community.

At the other end of the spectrum, the damage is much more nuanced and systemic.  Intelligence professionals aren't just undermining their own training; they are miscommunicating to those outside the community as well.  The effects of this may seem manageable, even trivial, to some, but imagine a software engineer trying to design a product to support intelligence operations.  This individual will know nothing but the Cycle, will take it as an accurate description of the process, and will design products accordingly.

In fact, it was the failure of these kinds of software projects to gain traction within the intelligence community that led Georgia Tech visual analytics researcher Youn-ah Kang and her advisor, Dr. John Stasko, to undertake an in-depth, longitudinal field study to determine how, exactly, intelligence professionals did what they did.  While all of the results of their study are both interesting and relevant, the key misconception they identified is that "Intelligence analysis is about finding an answer to a problem via a sequential process."  The failure to recognize this misconception earlier resulted, in turn, in the failure of many of the tools they and others had created.  In short, as Kang and Stasko noted, "Many visual analytics tools thus support specific states only (e.g., shoebox and evidence file, evidence marshalling, foraging), and often they do not blend into the entire process of intelligence analysis."

Next:  Part 2 -- The Mercyhurst Model