“Are we winning?” That simple question has proven decidedly difficult for the U.S. military to answer at various times in its history, and in recent decades it has given rise to formal processes for measuring progress in warfare—a practice now known as “operations assessment.” Over the past decade, CNA analysts have become increasingly involved in operations assessment, as has the broader military operations research community. Yet, to our knowledge, there is no comprehensive account of the history of operations assessment. This paper serves as a primer on the subject, providing context on how operations assessment has grown and developed over the past several decades. We hope it will provide a foundation, and a source of encouragement, for a more robust history of this topic.
Although the genesis of what we now refer to as operations assessment lies in World War II and the growth of operations research, it was during the Vietnam era that the concept took on unprecedented significance. As the United States became immersed in a counterinsurgency fraught with ambiguities, analysts from the nascent field of operations research and systems analysis (ORSA), among others, strove to find ways to measure progress on the ground. The main driving force behind this effort was Secretary of Defense Robert McNamara. A strong proponent of systems analysis and other quantitative methods, McNamara championed the idea that data collected on the ground could be used to develop accurate and precise measures of progress.
McNamara’s approach took the form of three primary assessment models: body counts, the Hamlet Evaluation System, and the Southeast Asia Analysis Reports. Each depended on highly quantitative inputs to produce measurements intended to reflect the degree to which the U.S. military was meeting its objectives in Vietnam. While this effort represented an innovative approach to a very difficult problem, the process proved highly unreliable as a reflection of actual progress. The emphasis on quantitative measures crowded out qualitative judgment and ignored the disparate nature of the environment in Vietnam, failing to account for nuance and context. By the end of the war, policymakers and Defense leaders were wholly disenchanted with the assessments process and summarily dismissed it as invalid and useless.
In the years following Vietnam and leading up to the end of the Cold War, doctrine and thinking on operations assessment were minimal and tended to revert to pre-Vietnam methods such as battle damage assessment and the commander’s estimate. The explanation for this lack of emphasis lies primarily in the geopolitical dynamics of the Cold War. Because the U.S. defense posture was oriented almost exclusively toward countering the Soviet Union, it became far more important to compare and measure the capabilities of the two nations’ militaries than to assess the effectiveness of their operations. Since the two superpowers never actually engaged in “hot” conflict, the perceived need for a robust operations assessment process was virtually nonexistent. Coupled with the backlash against the highly quantitative Vietnam-era approach, this meant that operations assessment fell into relative obscurity during the later years of the Cold War.
With the collapse of the Soviet Union, the United States took on a new role as the sole superpower, and with that shift came new global responsibilities. U.S. engagement around the world began to grow, and the need for an assessments process re-emerged. At the same time, the world was entering the Information Age, in which vastly improved computing capabilities were prompting sweeping changes in every sector, including defense.
With these changes came several new conceptual undercurrents that would ultimately influence the development of a new approach to operations assessment. Network-centric warfare, the “Revolution in Military Affairs,” and, ultimately, effects-based operations all grew out of the idea that advanced technologies would usher in a new era in warfare, one that made war almost, if not entirely, calculable. For operations assessment, these changes signaled a similar shift in thinking back toward more quantitative approaches. Effects-based assessment, which emphasized quantitatively expressed measures of performance and measures of effectiveness to determine the level of progress, became the dominant framework for operations assessment.
The wars in Iraq and Afghanistan have brought operations assessment significance and attention unparalleled in U.S. military history. Initially relying on effects-based assessment to provide the necessary framework, assessors in both conflicts encountered obstacle after obstacle as the shortcomings of this approach became apparent. The disconnect between counterinsurgency theory and the assessments process that had plagued operations assessment in Vietnam re-emerged, and the result has been equally frustrating. The promise that technological advancement and the effects-based framework would help make sense of the vast amounts of data coming from both theaters has fallen short. Once again, the failure of the process to account for local context and the pitfalls of trying to quantify complex dynamics have made the production of accurate and useful assessments a persistently elusive aim.
This brief history of operations assessment reveals important trends and provides insight into how the process can be improved in the future. The oscillation between quantitative and qualitative inputs and approaches to assessment has been a persistent theme. The emphasis on operations assessment also seems to ebb and flow with the level of demand for it, especially from high-level Defense Department leadership. That demand, in turn, seems proportional to a lack of obvious progress: when progress is difficult to demonstrate and communicate, as is especially true in counterinsurgency, there is greater demand for accurate assessments. Conversely, in eras like the Cold War, when the focus was on pre-conflict comparisons of forces, demand for operations assessment was almost nonexistent.
Recent trends in operations assessment include some high-level commanders’ use of assessments to bolster the message they wish to communicate to higher leadership and public audiences. In an effort to gain the support of policymakers and other civilian audiences, some commanders have selected metrics and shaped assessments in ways that paint a picture of progress consistent with their intended message. This development has had significant implications for operations assessment. The processes at numerous high-level commands have undergone substantial overhauls as subsequent commanders seek a balance between satisfying high-level civilian audiences and building assessments that still provide meaningful inputs to their commands’ internal planning processes.
The recent high level of demand for assessments has brought about what General James Mattis referred to as a “renaissance” in operations assessment. There is currently great momentum behind efforts to build a workable and effective framework for assessments, which has given rise to revisions of major U.S. doctrinal publications on the subject. This presents an opportunity to take lessons learned from the history of operations assessment and apply them to the development of processes that avoid the problems of the past and anticipate future considerations.
Challenges such as “big data” and limited analytic capacity, increased emphasis on operations assessment amid discord over how to conduct it, the demands of competing audiences, a lack of training and education for practitioners, and the uncertainties and ambiguities of warfare will continue to make operations assessment a challenging endeavor. Yet, if we are to avoid adding another chapter of failure and frustration to the history of operations assessment, it is imperative for the U.S. military to seize the momentum and opportunity that currently exist to re-examine and redesign effective means of assessing progress in future operations.
Distribution unlimited.
Details
- Pages: 56
- Document Number: DOP-2014-U-008512-1rev
- Publication Date: 9/3/2014