RAND Analyses of Air Force Security Cooperation Efforts

The first paper is:

(hat-tips to Mike Markowitz and Chris Weuve)

Summary

This study looks at two Unified Endeavor Building Partnerships (UEBP) seminars, held in Sweden and Estonia in 2009 and 2010 respectively. The UEBP series seeks contributions from countries not normally participating in the full-up UE Title X wargame. The study used RAND's security cooperation assessment framework to assess the seminars. There are some interesting specific findings about the two events, but the process discussion is of more relevance to this audience. The top-level recommendations include:

Institutionalize the BP seminar program with an Air Force instruction or other authoritative documentation.

Provide measurable objectives for each seminar by establishing clearer links to combatant command guidance.

Reduce costs through broader stakeholder involvement in initial event planning.

Capture seminar insights, indicate their relevance and value, and demonstrate the importance and effectiveness of the seminars to stakeholders through an after-action reporting process.

Assess the extent to which an event met its objectives, identify necessary areas for follow-up, and inform planning for future events by developing and implementing follow-up mechanisms such as post-event interviews and participant surveys.

Utilize RAND's security cooperation framework (SCF) in the above activities to enable a global, strategic view of BP efforts, rather than focusing on individual components.

So what is the SCF? It appears to represent a distillation of generic assessment best practice across a wide swath of assessment types. A summary follows:

The RAND SCF has six key elements: guidance, programs, stakeholders, authorities, five levels of assessment, and indicators and metrics for assessment. The five levels of assessment are (in inverse order of "layers"):

5. Cost-effectiveness,
4. Outcomes and effects,
3. Process and implementation,
2. Design and theory, and
1. Need for program (i.e., requirements)

Questions are then developed that relate to these levels of assessment, for example:

5. Did the U.S. pay costs that partners should have paid?
4. Were U.S.–partner relationships strengthened?
3. Were the participants satisfied with the interaction?
2. What kinds of activities yielded the best results?
1. Does the U.S. still need to improve or maintain relations with the chosen partners?
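
These levels and questions lend themselves to a simple lookup structure. A minimal sketch in Python follows; the level names and questions come from the paper, but holding them in a dict is purely my own representation:

```python
# A hypothetical encoding of the SCF's five assessment levels paired with
# the example questions above. Level names and questions come from the
# paper; the dict representation is my own.

ASSESSMENT_LEVELS = {
    1: ("Need for program",
        "Does the U.S. still need to improve or maintain relations "
        "with the chosen partners?"),
    2: ("Design and theory",
        "What kinds of activities yielded the best results?"),
    3: ("Process and implementation",
        "Were the participants satisfied with the interaction?"),
    4: ("Outcomes and effects",
        "Were U.S.-partner relationships strengthened?"),
    5: ("Cost-effectiveness",
        "Did the U.S. pay costs that partners should have paid?"),
}

for level, (name, question) in sorted(ASSESSMENT_LEVELS.items()):
    print(f"Level {level} ({name}): {question}")
```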

As with the identification of analysis questions in analytic efforts, this is often where trouble starts: bad questions. The trouble is compounded when the indicators and metrics are also developed by "brainstorming" rather than derived from theory. Such a theory is almost universally lacking, in security cooperation circles as in analysis circles, with the added problem here that partner countries often have agendas and an incentive to "tell data collectors what they want to hear".

Stakeholders are then identified to answer questions related to their areas of expertise and interest, for example:

AF/A3/5 (AF Deputy Chief of Staff for Operations, Plans and Requirements): Levels 1, 2, and 5
Component Commands: Levels 2 and 3
SAF/IA (Deputy Under Secretary of the Air Force for International Affairs): Level 4

Stakeholder roles are then identified as Data Collectors (AF/A3/5 and Component Commands), Assessors (AF/A3/5 and Component Commands), Reviewers (SAF/IA and AF/A3/5), and Integrators (SAF/IA).
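
As a sketch, that assignment amounts to a small table. The organizations, levels, and roles below follow the paper; the code representation is my own assumption:

```python
# A hypothetical table of the stakeholder assignments above: which
# assessment levels each organization answers for, and which roles it
# plays. The contents follow the paper; the structure is invented.

STAKEHOLDERS = {
    "AF/A3/5": {
        "levels": [1, 2, 5],
        "roles": ["data collector", "assessor", "reviewer"],
    },
    "Component Commands": {
        "levels": [2, 3],
        "roles": ["data collector", "assessor"],
    },
    "SAF/IA": {
        "levels": [4],
        "roles": ["reviewer", "integrator"],
    },
}

# Note that only one organization integrates results into policy:
integrators = [org for org, info in STAKEHOLDERS.items()
               if "integrator" in info["roles"]]
print(integrators)  # -> ['SAF/IA']
```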

The integrator's role is to take the results of the assessment and integrate them into appropriate policy decisions. That the SAF/IA organization, and not the Component Commands, is the sole integrator says a lot about the inherent bias toward process, rather than activity, as the primary type of decision-making to be influenced.

The paper is telling both for what it says about the subject and for what the analysis reveals about the assumptions, preconceived notions, and biases of those writing it. Any set of recommendations that ends with "recommend you institute my idea as your policy" should have you raising the BS flag so fast it tears…

See the report for the details of the analysis of the two examples.

The second paper is:

Summary

This report is intended to give Air Force planners a clearer understanding of the programs available for working with partner countries around the world: the aviation resources available for security cooperation, the rules governing the use of those resources, and how to apply them. It does so via a construct, tied to U.S. strategic objectives, that illustrates how these resources can be employed in varying situations with different types of partner countries. Specifically, the report identifies programs available to USAF planners, including their purpose, authorities, resources, regional focus, and key points of contact. It also provides a construct for employing those programs that takes into account the partner's relationship with the United States, and it considers in detail the most appropriate types of assistance, given a partner's willingness and capacity to work both with the United States and in a regional context.

The paper again works from the RAND Security Cooperation Framework. It suggests that use of the SCF "can support decisions to adjust, expand, contract, or terminate a program. Assessments can support decisions regarding what services a program should deliver and to whom. Assessments can support decisions about how to manage and execute a security cooperation program." It clarifies that the five "levels" are nested levels of assessment: success at a higher level is predicated on success at the lower levels. It also adds preliminary planning and resourcing steps, and subsequent training steps, to the SCF assessment piece.
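
That nesting can be made concrete with a small, entirely hypothetical sketch, assuming a simple pass/fail result per level (the level names come from the papers; nothing else does):

```python
# Hypothetical illustration of the SCF's nesting: a higher assessment
# level is only meaningful if every level beneath it "passed". Level
# names are from the papers; the pass/fail logic is invented.

LEVELS = [
    (1, "Need for program"),
    (2, "Design and theory"),
    (3, "Process and implementation"),
    (4, "Outcomes and effects"),
    (5, "Cost-effectiveness"),
]

def highest_trustworthy_level(results):
    """results: dict mapping level number -> bool (passed?).
    Returns the highest level whose assessment rests on solid lower levels."""
    highest = 0
    for level, _name in LEVELS:
        if not results.get(level, False):
            break  # a failure (or missing data) invalidates everything above
        highest = level
    return highest

# A program with satisfied need, sound design, and good execution, but
# unproven outcomes: any cost-effectiveness claim would be premature.
print(highest_trustworthy_level({1: True, 2: True, 3: True, 4: False, 5: True}))  # -> 3
```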

For the SC planner, the paper suggests an eight-step process (sketched in code after the list):

(1) Identify the purpose,
(2) identify relevant security cooperation programs,
(3) conduct an analysis of potential partners’ operational and technical utility in order to
(4) identify the most relevant partners,
(5) conduct an analysis of potential partners’ political-military characteristics in order to
(6) select the most relevant and appropriate partners,
(7) match partners with appropriate security cooperation activities and programs, resulting in
(8) the key components of the security cooperation plan.
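
Read as a pipeline, the process might look like the following sketch. The step numbering follows the report; the thresholds, data shapes, and helper logic are all invented for illustration:

```python
# A hypothetical sketch of the eight-step planning process as a pipeline.
# The step names follow the report; every data shape, threshold, and
# field name below is an invented placeholder.

def build_sc_plan(purpose, program_catalog, candidate_partners):
    # (1) Identify the purpose -- carried in `purpose`.
    # (2) Identify relevant security cooperation programs.
    programs = [p for p in program_catalog if purpose in p["purposes"]]
    # (3) Analyze potential partners' operational and technical utility...
    utility = {c["name"]: c["ops_utility"] for c in candidate_partners}
    # (4) ...to identify the most relevant partners.
    relevant = [n for n, u in utility.items() if u >= 0.5]
    # (5) Analyze those partners' political-military characteristics...
    polmil = {c["name"]: c["polmil_fit"] for c in candidate_partners}
    # (6) ...to select the most relevant and appropriate partners.
    selected = [n for n in relevant if polmil[n] >= 0.5]
    # (7) Match partners with appropriate SC activities and programs...
    matches = {n: [p["name"] for p in programs] for n in selected}
    # (8) ...yielding the key components of the security cooperation plan.
    return {"purpose": purpose, "partners": selected, "matches": matches}

# Invented example data:
catalog = [{"name": "Mobile Radar Training", "purposes": {"isr", "basing"}}]
partners = [{"name": "Country A", "ops_utility": 0.8, "polmil_fit": 0.9},
            {"name": "Country B", "ops_utility": 0.2, "polmil_fit": 0.7}]
print(build_sc_plan("isr", catalog, partners))
```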

Part of the SC challenge is the lack of transparency of the process throughout the government:

“Currently, no process, single organization, or database systematically tracks all these programs and activities. The COCOMs track DoD activities within their areas of responsibility through the Theater Security Cooperation Management Information Systems (TSCMIS), but not all programs (i.e., many Service-level programs) are included. Typically, the programs of U.S. government civilian agencies are not included in any comprehensive way. The USAF tracks its activities globally, but not systematically. Again, the activities of the other Services, at both the Joint and government level, are not included in the TSCMIS. The result is a massive information jumble, making USAF planning for security cooperation a real challenge.”

To make matters worse:

“Even if all the information were made available, however, the most astute USAF planner would likely still find it difficult to track all the activities in a given partner country or region. It is unrealistic to expect USAF security cooperation planners, many of whom may be new to their jobs and still learning it, to be aware of what programs exist across the USAF, DoD, and U.S. government. It is equally unrealistic to expect them to know all the authorities and legalities that govern the use of those resources, let alone how to apply and sequence them to achieve real objectives.”

The analytic construct to guide the planning portion of the process is shown here:

[Figure: RAND SCF planning analytic construct]

Executing this process requires “knowing one’s community and counterparts well, as suggested by the interfacing plans in Figure S.1, and ensuring the interchange of information relevant to ongoing planning efforts. Moreover, the importance of this network increases exponentially because security cooperation resources are so dispersed. The most successful security cooperation planners across the U.S. government tend to be those who have built and are able to sustain a solid network of colleagues and contacts.”

[Figure: Country plans relationships]

The third piece of the construct is the identification of three categories of activities: nascent, developing, and advanced:

[Figure: Relationships among SC activities]
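
One way to read that categorization is that the deeper the existing relationship with a partner, the more advanced the suggested activity. A hypothetical sketch, with the category names from the report and the score and thresholds invented:

```python
# A hypothetical reading of the three activity categories. Category names
# come from the report; the maturity score and thresholds are invented.

from enum import Enum

class ActivityCategory(Enum):
    NASCENT = "nascent"
    DEVELOPING = "developing"
    ADVANCED = "advanced"

def suggest_category(relationship_depth: float) -> ActivityCategory:
    """relationship_depth: invented 0-1 score of how mature the partnership is."""
    if relationship_depth < 0.3:
        return ActivityCategory.NASCENT
    if relationship_depth < 0.7:
        return ActivityCategory.DEVELOPING
    return ActivityCategory.ADVANCED

print(suggest_category(0.5))  # -> ActivityCategory.DEVELOPING
```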

The chapter describing the SCF defines assessment in an interesting way:

"Assessment is research or analysis to inform decision-making. When most people think of evaluation or assessment, they tend to think of outcomes assessment: Does the subject of the assessment “work”? Is it worthwhile? While outcomes are certainly within the purview of assessment, assessments cover a much broader range and can be quite varied.

Most assessments are conducted using research methods common in the social sciences. However, evaluation and assessment can be distinguished from other forms of research in their purpose. Assessment is fundamentally action-oriented. Assessments are conducted to determine the value, worth, or impact of a policy, program, proposal, practice, design, or service with a view toward making change decisions about that program or program element in the future. In short, assessments must be explicitly connected to informing decision-making.” (italics in original)

Challenges to performing assessments discussed in the report include:

Determining Causality – “In many instances, the best we can hope for at the outcomes level is to find some relationship between success in security cooperation programs and progress within security cooperation focus areas.”

Articulating intermediate goals to inform decision-making (in the interim of achieving broader goals) – “However, it is analytically very difficult to tell whether or not something is working when causal connections are conflated with other activities or end states and goals are high-level, opaque, difficult to measure, or require only that a program or activity contributes indirectly.”

(Lack of) Assessment capabilities of AF stakeholders – “Resource constraints can adversely impact the quality of data collection.”

Multiplicity of and differing priorities of stakeholders – “Decisions for and about these programs are made by many different organizations and at many different levels.”

Data Systems are not organized to support assessment – “Some Air Force–specific data are maintained in Knowledgebase and the COCOMs’ respective TSCMISs, but not all security cooperation stakeholders provide inputs, nor do they all have access to these systems.”

Confusing Terminology – “A certain consistency is essential if Air Force organizations are to be able to manage assessments over time as the guidance changes. For example, how might one know if goals and end states are one in the same or different? Are “goals” and “ends” equivalent? What are the differences between “outputs” and “outcomes”?”

Delegating Assessment responsibilities (to too low a level) – “…many of the officers and staffers charged to perform the assessments have operational backgrounds; they are not trained to design and perform assessments.”

Expectations and Perceived Notions of Assessment – “Further, the idea that assessment adds limited value or that it is required merely to satisfy curiosity rather than to inform essential decisions can lead to superficial evaluations or create resistance to assessment proposals.”

Chapter 3 is a vignette demonstrating how to apply the process. Appendix A is a list of "program pages" for the 97 different programs that apply to AF SC activities.

And once again the paper recommends:

"Fourth, the USAF should consider publishing this primer, or selected parts of it, as an Air Force handbook or manual. This undertaking would enable the data and construct to reach a much wider audience." (italics in original)

So what? Why is this stuff important to wargamers?

First, nearly all the issues apply not just to the “real-world” processes, but to the representation of these processes in wargames. Assessment problems of real-world SC activities have corollaries in the assessment of SC activities in wargames. Planning issues for the real world are applicable in planning for wargames.

Second, the challenges discussed are not only challenges to SC assessment, but to assessment tasks in general (a key facet of wargaming).

In the first paper, the level of maturity of the processes described indicates either a lack of expertise in wargaming disciplines on the part of the planners, implementers, and analysts for these events, or a lack of leadership support for taking such requirements seriously. The Navy is not any better off in this regard. I'm not very knowledgeable about the Army, which, like the AF, has an Operations Research/Systems Analysis (ORSA) MOS, but the extent to which its senior members engage senior decision-makers is not known. How much of the ORSA curriculum and career path focuses on wargaming matters here (anecdotal information says not that much).

The RAND SCF represents a "step out of the starting blocks" regarding assessment of security cooperation efforts, but the types of questions proposed do not reflect a very high level of sophistication, given the claimed utility (i.e., from this case it appears to be program-based (level 1), not outcome-based (level 4)). It does not appear to support the ability "to enable a global, strategic view of BP efforts, rather than focusing on individual components." Level 1 in such a program would be the identification of combatant commander desires and guidance, working from there toward an engagement "campaign" of programs that contribute to those desired conditions.

The analytic construct tries to get at that, in a formulaic, associative sort of way (I have 98 programs; which of them have something to do with the general topic of my requirement regarding my "relevant potential partners"?). Even the standard MDMP is a better basic approach than this… To the extent that the example homes in on an AF-specific problem (gee, a "strategic" sortie generation rate and ISR coverage problem), one doesn't need a sophisticated assessment process to figure out "will (insert country here) let us use their airfield and set up a mobile radar station?". To the extent that all the facets of the complicated assessment methodology are needed, it is less likely that this is an AF-specific issue (i.e., why can't the Army or Navy provide a capability, or leverage their SC apparatus to acquire at least some of the needed capabilities?). Let's maybe consider that the host nation might be "supported" in an SC effort by us, and not the other way around.
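
To make that critique concrete, the associative matching amounts to something like this deliberately naive sketch (all program names and keywords below are invented):

```python
# A deliberately naive sketch of the "formulaic, associative" matching the
# construct implies: filter a program catalog by topical overlap with a
# requirement. All program names and keywords below are invented.

PROGRAMS = {
    "Aviation Leadership Program": {"education", "exchange"},
    "Airfield Operations Assistance": {"airfield", "basing", "logistics"},
    "Mobile Radar Training": {"isr", "radar", "training"},
}

def associative_match(requirement_keywords):
    """Return programs sharing any keyword with the requirement --
    which says nothing about whether they would produce the desired outcome."""
    return [name for name, tags in PROGRAMS.items()
            if tags & requirement_keywords]

print(associative_match({"airfield", "radar"}))
# -> ['Airfield Operations Assistance', 'Mobile Radar Training']
```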

There is also a bias toward analytic decomposition of a problem into its solution space, followed by synthesis of a solution from elements of that space. This assumes that the world has a strong streak of Newtonian determinism, allowing the "operating environment" to be held constant while we work the problem of our particular dependent variable. More on this in a subsequent post…


