Are the wargames conducted for the Department of Defense good enough? That question has been getting some attention lately. It might not be the right question.
Wargame sponsors do not need a better widget, a smoother experience, or a perfectly designed wargame. The officers who request CNA wargames are rarely interested in the perfect methodology for their problems. They need real, actionable recommendations on very hard, ill-defined problems, and they need those solutions “yesterday.” Both defense analysts like me and the DOD sponsors who use our services have limited time and attention to devote to finding the right solution.
The question of whether the quality of defense wargaming is good enough — or indeed whether wargaming actually works — was most recently raised by two analysts, Yuna Wong and Garrett Heath. This line of inquiry is valid, but it fails to consider the broader ecosystem in which wargaming resides. Focusing on the methods internal to wargames risks missing the point of holding wargames at all.
Adding more layers of bureaucracy and quality control does not solve the problems of those seeking answers through wargames. Outcomes are the important issue. Do our wargames provide a window into a useful reality? If so, we’ve done our jobs correctly. In fact, raising methodology — how we wargame — with our DOD sponsors not only misses the point of defense analysis, it clouds our conversations with them.
Debates over the definition of a wargame can actually distract the military from more important questions. One command asked CNA for help with a big, underspecified problem. As part of the proposed study plan, CNA recommended a large tabletop exercise. The analysts at the command later noted that the tabletop exercise had been renamed an “event.” Asked why, they said they wanted to avoid an argument with participants over the definition of a wargame versus a tabletop exercise. In other words, instead of working to solve a very hard problem, a sponsor was spending time weighing pushback from participants over whether an event checked the right boxes to qualify as a wargame.
Involving DOD in the arguments over how to build a better wargame widget does not answer the questions they face from the whole of government. Congress recently asked one of CNA’s sponsors how wargames have affected long-term procurement and plans. This question is hard to answer, and the sponsor reached out to CNA for help. What the sponsor needed was a statement about effects, not the methods of each of the wargames. Focusing on the methods would only detract from the effort to answer the inquiry.
If a focus on verifying that wargames use the “right” tools can distract decision-makers from actually solving defense problems, what’s the effect on the analysis community?
Discussing methods, rigor, and proper approaches to analysis is key to what we, as defense analysts, do. But when that conversation overwhelms and supplants our efforts to address the broader problems that the Department of Defense must solve, wargamers and analysts have failed in their jobs. There are dangers in mistaking rigorous wargaming for effective wargaming.
As many defense analysts are fond of saying, “All models are wrong, some models are useful.” The usefulness of a wargame is not in its methods, but in its outputs.