The government’s decision to join a US-led coalition protecting shipping from Iranian interference has prompted some measured concern.
On one hand, the ADF deployment is largely symbolic. A single P-8A Poseidon for 30 days, eventually followed by a frigate and a logistics team under Operation Manitou, is essentially a political flag-lending exercise involving assets that may have been headed to the Middle East anyway.
On the other hand, a number of commentators argue that the deployment distracts from more pressing issues in the Indo-Pacific, undermines the credibility of the rules-based order, risks another military quagmire, highlights our poor fuel supplies and endorses the US ‘maximum pressure’ campaign on Iran that so far seems to have created more problems than it has solved.
Yet perhaps there is more we should be concerned about. If things go south, Iran could plausibly start winning from the opening salvo of battle.
To understand how, we need to start by questioning a common assumption: that knowledge is power. In reality, humans use far less information to make decisions than we might think - and often make better decisions as a result.
The US military once learnt this the hard way. In 2002, the Pentagon ran a massive war-game called ‘Millennium Challenge’ that almost exactly simulated what is now panning out in the Persian Gulf - the deployment of naval assets to thwart a rogue Middle Eastern state. The friendly Blue Team were provided with ‘dominant battlespace knowledge’ from an array of sensors and analytics, whilst the opposing Red Team were modelled on the Iranian military.
Yet in the game’s opening ten minutes, the Red Team, led by Vietnam veteran Lieutenant General Paul Van Riper, overwhelmed the US Navy’s Aegis systems with swarms of explosive-laden speedboats and missiles and sank 19 ships, including an aircraft carrier, with a simulated loss of 20,000 American lives.
Millennium Challenge was made famous by Malcolm Gladwell, who argued that the Blue Team’s attempt to lift the fog of war overwhelmed it with information and fatally slowed its own ability to make decisions.
Van Riper, meanwhile, put himself ‘in command but out of control’. He limited the information available to himself and his subordinates to enable what Gladwell called ‘rapid cognition’. Van Riper’s method was not the only contributor to the game’s outcome, but he nonetheless exposed the strategic peril of overwhelming decision-makers with too much tactical knowledge.
The same peril persists today. As John Blackburn and Ian McDonald have written for ADM, modern Western military forces incorporate “sensor proliferation across all imaginable spectra [and] exponential growth in data generation.” The challenge, as the ADF sees it, is to find better ways to get this information to where it needs to go.
Is this the latest version of the ‘dominant battlespace knowledge’ the Blue Team was supposed to enjoy? Could Iran take a leaf out of Van Riper’s book and sink the USS Abraham Lincoln?
Of course, the quality of battlespace information has improved greatly since 2002. This partly offsets the negative effects of quantity, but research shows that an excess of even high-quality information degrades decision effectiveness. Moreover, when faced with uncertainty and conflicting outcomes, people are poor judges of whether information is high-quality and tend to overvalue irrelevant information. And war is about as uncertain as it gets.
It is not difficult to imagine Iran’s leaders creating and then exploiting uncertainty. The commanders of the Revolutionary Guard Corps have decades of combat experience across the Middle East and are likely to play their limited cards as well as possible.
So if things do go south, the RAN and the US Navy may find that all the data their new platforms generate is actually a vulnerability. If Iran is able to overwhelm decision-makers with information using low-cost speedboats and missiles, as Van Riper did when given the same assets, it could plausibly win the opening salvos of a war.
This is not to say that knowledge is a disadvantage. The benefits of getting the right amount of information to the right person at the right time are obvious. But the key phrase there is the ‘right amount’: too much information, no matter how high-quality, becomes a problem that others can exploit. It seems counter-intuitive, but perhaps we need to limit the movement of information to maintain decision superiority.
In Gladwell’s book, Van Riper likens the battlespace to a chess board: even though you can see everything, victory still isn’t guaranteed. The US may be risking yet another war in the Middle East because Washington is confident it has the ‘dominant battlespace knowledge’ it needs to win. But is knowledge really power? Or is judgement?