It’s February 1962, inside the Pentagon. A team of 45 under the command of the US Joint Chiefs of Staff is preparing to run a game designed to simulate the outcome of growing tensions in Southeast Asia, including a possible American intervention in Vietnam. Two teams are drawn up – blue for friendly, red for enemy – and play commences. The friendlies lose.
The Pentagon, however, did not stop there. The game was the first in a series known as Sigma, held throughout the 1960s, to create strategies and model possible outcomes for the Vietnam War. The games were so realistic that they predicted numerous events that actually unfolded in the real world, including the capture of an American pilot in June 1964, the introduction of American infantry in early 1965 to defend air assets, and the removal from office of General Nguyen Khanh through public pressure.
Almost all the games predicted a communist victory. The Sigma simulations (or wargames) were, in the words of US Lieutenant General HR McMaster, “eerily prophetic”. As the rise of Chinese power threatens US military and technological pre-eminence, the use of military simulations to create and inform strategy is coming back into vogue.
Wargames, in a nutshell, are analytic games that simulate war at the tactical, operational, and strategic levels. Two teams face off against each other and the results are used to assess new warfighting concepts, train commanders, and test the impact of hypothetical new technologies. They appeal to military strategists and planners because games like Sigma offer a glimpse of the holy grail – a prediction of the future.
The interaction of two self-interested players in a game allows strategists to answer two types of questions: confirmatory questions, which use simplifications of the real world to generate a hypothesis for further research outside the game; and exploratory questions, which illustrate how variables influence outcomes. For example, running multiple games and changing one variable at a time (environments, military capabilities, adversaries) illustrates how each of those variables changes friendly and enemy decision-making processes.
Most wargames, whether confirmatory or exploratory, follow a simple logic: put two self-interested actors in a room with certain capabilities, see what they do, and then draw the relevant lessons. Did one actor surprise the other, and how? Is there a pattern in their actions that indicates the likelihood of that surprise occurring? Does a new capability work well in multiple scenarios? If we change the capability, the actors, or the scenario, what happens?
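The one-variable-at-a-time approach can be sketched in a few lines of Python. Everything below is invented for illustration: a toy engagement model stands in for a real wargame, and `blue_strength` is the single capability being varied.

```python
import random

def run_game(blue_strength, red_strength=1.0, target_hits=5, seed=None):
    """Toy stand-in for a wargame: each round, one side scores a hit
    with probability proportional to its strength; first to
    target_hits wins. All numbers here are illustrative."""
    rng = random.Random(seed)
    blue_hits = red_hits = 0
    while blue_hits < target_hits and red_hits < target_hits:
        if rng.random() < blue_strength / (blue_strength + red_strength):
            blue_hits += 1
        else:
            red_hits += 1
    return "blue" if blue_hits == target_hits else "red"

def sweep(strength_values, trials=2000):
    """Run many games, changing only one variable (blue's strength),
    and record how the outcome distribution shifts."""
    return {
        v: sum(run_game(v, seed=t) == "blue" for t in range(trials)) / trials
        for v in strength_values
    }

for strength, win_rate in sweep([0.5, 1.0, 1.5, 2.0]).items():
    print(f"blue strength {strength}: blue win rate {win_rate:.2f}")
```

The pattern is the point: hold everything fixed, vary one input, re-run many times. That is how an exploratory game isolates the effect of a single capability or scenario change.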
Ends vs means
These are all means-end questions – set the means and observe the ends. This approach is similar in principle to giving two children one cookie, telling them to divide it equally, and seeing what happens. Most parents will tell you that the child holding the cookie will break off the larger piece for themselves.
What happens, however, if we set up the game in a way that guarantees the outcome we want?
In the cookie scenario, we’d set up the interaction to incentivise both children to work towards the desired result. One child breaks the cookie and the other gets to choose the first piece. The child with the cookie is now motivated to break it as evenly as possible. We have made two self-interested actors achieve the outcome we want by working backwards from the end goal to create the means that make it possible.
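As a sketch (all values illustrative), the two cookie rules can be compared directly. Under a hypothetical ‘dictator’ rule the cutter keeps whichever piece they prefer; under divide-and-choose the other child picks first, so a self-interested cutter’s best move shifts to an even split:

```python
def cutter_payoff(cut, mechanism):
    """cut is the fraction of the cookie in one piece (0 to 1).
    'dictator': the cutter keeps whichever piece they prefer.
    'divide_and_choose': the other child chooses first, leaving
    the cutter the smaller piece."""
    pieces = (cut, 1 - cut)
    return max(pieces) if mechanism == "dictator" else min(pieces)

def best_cut(mechanism, grid=101):
    """A self-interested cutter picks the cut that maximises their payoff."""
    cuts = [i / (grid - 1) for i in range(grid)]
    return max(cuts, key=lambda c: cutter_payoff(c, mechanism))

print(best_cut("dictator"))           # 0.0: the cutter keeps the whole cookie
print(best_cut("divide_and_choose"))  # 0.5: an even split is now the best response
```

Same cutter, same self-interest; only the rule about who picks first changed, and with it the optimal strategy.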
This is a simple example of an approach known as mechanism design. It has been hugely influential in the field of economics (its pioneers shared the 2007 Nobel Memorial Prize in Economic Sciences) and has been used to ‘reverse-engineer’ policies ranging from Singaporean housing regulations to auctions.
“Mechanism design is a branch of microeconomics that focuses on how to construct a game, a set of rules, that cause the agents strategically interacting in that game to play an equilibrium that is desirable to the mechanism designer,” Professor of Economics at UNSW Richard Holden told ADM.
“It’s interesting in situations where the players have some private information that the mechanism designer doesn’t have.”
A wargame, on paper, is one such situation. Two or more self-interested players are interacting using information that is not available to the designer – namely, their beliefs, assessments, and intentions. The question, then, is whether we can mechanism design a wargame.
If we can cause the agents to play to an equilibrium that favours one side, then perhaps it is possible to guarantee success regardless of how the game pans out. What if Sigma had been designed to ensure an American victory? The design of the game might then have informed actual US strategy in Vietnam and helped avoid the catastrophe that followed.
In short, can we reverse-engineer a war-winning strategy?
Designing a wargame
The idea is not fanciful. The US Defense Advanced Research Projects Agency (DARPA) put out a request a few years ago for experts to contribute to a mechanism-designed wargame.
Specifically, DARPA sought to design “rules, norms, and structural factors that incentivize other state or non-state actors to act in such a way that a desired strategic outcome for a single actor is realized.”
To be clear, this does not simply mean rigging the game. It is about creating the right conditions.
“You can always rig the rules in favour of one player,” Professor Holden said. “But there’s a big difference between the rules of the game and the strategy that people play. The rules of chess, say, are where the pieces can move, but a strategy is whether I move this piece here or there.”
Mechanism design, then, is not about creating new rules of chess that consistently allow the black player to win. It is about playing within the existing rules, but somehow incentivising white to adopt a strategy that falls into black’s hands.
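A minimal sketch of that idea, with entirely invented payoffs: the rules (white’s available actions and the payoff table) stay fixed, but the designer attaches an outside incentive, say a trade concession, to the action black prefers, flipping white’s best response:

```python
# Payoffs are (white, black); every number is invented for illustration.
# In the baseline game, a self-interested white plays 'aggressive'.
baseline = {
    "aggressive": (3, 0),
    "cautious":   (2, 2),
}

def whites_choice(payoffs):
    """White simply plays whichever action pays white the most;
    the rules of the game are never changed."""
    return max(payoffs, key=lambda action: payoffs[action][0])

def with_incentive(payoffs, action, bonus):
    """The designer cannot rewrite the rules, but can attach an
    outside incentive to one of white's actions."""
    shaped = dict(payoffs)
    w, b = shaped[action]
    shaped[action] = (w + bonus, b)
    return shaped

print(whites_choice(baseline))                                 # aggressive
print(whites_choice(with_incentive(baseline, "cautious", 2)))  # cautious
```

The equilibrium shifts not because the game changed but because the surrounding incentives did, which is the lever the DARPA request describes.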
This is where it gets complicated. The factors that might incentivise a state to play to an equilibrium that realises the desires of another range from economic and trade structures to military posture, diplomatic relations, and infrastructure. Moreover, DARPA acknowledged that “theory for designing mechanisms that fully utilize these many degrees of freedom is largely unexplored.”
An additional layer of complexity comes from the behaviour of the players themselves. Human behaviour is not predictably rational. It is derived as much from cultural predispositions, snap judgements, trust, social norms, and pre-existing beliefs as it is from pure analytical reasoning. How do you account for irrationality in a game?
Humans also show individual learning behaviours, and the differences between how two players might learn whilst playing the game are equally difficult to integrate into a mechanism that favours a single player.
“You’d have to specify how players update their information during the course of the game according to what they see,” Professor Holden said.
Winning by reverse engineering
There is one question, however, that may prove impossible to account for in a game.
Let’s assume we can actually reverse-engineer a war-winning strategy. The certainty of success could prove irresistible as a means of resolving geopolitical differences. What would China do if it was guaranteed victory in a war over Taiwan? In the absence of meaningful deterrence, war could become more likely. So underneath all this there lies a far deeper question: can we reverse-engineer morality?
In reality, we will probably never find out. The DARPA program never got off the ground.
“DARPA periodically issues RFIs in areas of interest to receive input from the broader science and technology community,” a spokesperson told ADM. “Sometimes RFIs lead to DARPA programs, and sometimes they do not. In this case it did not.”
Perhaps it is all too hard after all.
This article first appeared in the July 2019 edition of ADM.