
As the role of robotic and autonomous systems becomes more important within the ADF, building trust in autonomous systems is critical to the technology’s uptake.

Army released its Robotic and Autonomous Systems Strategy in 2018, RAAF has sought to accelerate the use of AI and autonomy through Plan Jericho, and the Navy is pursuing multiple acquisition programs that rely on autonomy.

Defence scientist Robert Hunjet said the promise of autonomous systems has been discussed for decades.

“Teleoperation - the concept of a remote-controlled vehicle, drone or tank - is not representative of autonomy as the vehicle has no ability to make its own decisions or task itself,” Dr Hunjet said.

To make these machines genuinely autonomous, Defence is undertaking research in areas including contextual awareness, active perception, path planning, multi-agent system control and swarming.

Improving a robot’s ability to work intelligently requires more than investment in machine learning. It is also about enabling systems to work together.

“With recent advances in drone technology, the concept of swarming has attracted a lot of interest,” Dr Hunjet said. “We observe swarming in nature, for example in the way birds flock and fish school. The individuals interact only with others in close proximity and the cascading effect of these local interactions provides a global behaviour through the swarm as a whole.

“Within robotics, we can emulate the creation of global behaviours in a similar fashion through local interactions with neighbouring systems, offering a potentially scalable approach to generating mass with minimal computational complexity or communications overhead.”
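To make the local-interaction idea concrete, the sketch below implements the classic “boids” flocking rules (cohesion, alignment, separation), one well-known way of producing global swarm behaviour from purely local neighbourhoods. It is illustrative only: the article does not describe Defence’s algorithms, and the neighbour radius, rule weights and speed cap here are invented.

```python
# Minimal 2D flocking sketch in the spirit of Reynolds' "boids":
# each agent reacts only to neighbours within a fixed radius, yet a
# coherent global behaviour emerges from these local interactions.
import numpy as np

N, RADIUS, MAX_SPEED, DT = 30, 5.0, 1.0, 0.1
rng = np.random.default_rng(0)
pos = rng.uniform(0, 20, size=(N, 2))   # agent positions
vel = rng.uniform(-1, 1, size=(N, 2))   # agent velocities

def step(pos, vel):
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (d > 0) & (d < RADIUS)     # local neighbourhood only
        if not nbrs.any():
            continue
        cohesion   = pos[nbrs].mean(axis=0) - pos[i]   # steer toward neighbours' centre
        alignment  = vel[nbrs].mean(axis=0) - vel[i]   # match neighbours' heading
        separation = (pos[i] - pos[nbrs]).sum(axis=0)  # avoid crowding
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.02 * separation
        speed = np.linalg.norm(new_vel[i])
        if speed > MAX_SPEED:                          # cap speed
            new_vel[i] *= MAX_SPEED / speed
    return pos + new_vel * DT, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
print("swarm spread after 100 steps:", pos.std(axis=0))
```

Note that no agent ever sees the whole swarm: each update touches only nearby agents, which is what keeps the computational and communications cost low as the swarm scales.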

To be considered robust, autonomous systems must be able to operate in difficult or contested situations. Algorithms must be stable in the face of unexpected system inputs.

Defence is also investigating approaches that would allow robotic systems to share their position and orientation information with others, which would then fuse these estimates with their own data to improve positioning accuracy.
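The article does not name a fusion method, but a standard textbook approach to combining independent position estimates is inverse-variance weighting, sketched below under that assumption; the positions and variances are made up for illustration.

```python
# Hedged sketch of cooperative positioning: a robot fuses its own
# position estimate with estimates shared by neighbours, weighting
# each by the inverse of its reported variance. This is the standard
# minimum-variance fusion rule for independent Gaussian estimates,
# not necessarily the algorithm Defence is developing.
import numpy as np

def fuse(estimates, variances):
    """estimates: (k, 2) positions; variances: (k,) scalar uncertainties."""
    w = 1.0 / np.asarray(variances)     # trust precise estimates more
    fused = (w[:, None] * np.asarray(estimates)).sum(axis=0) / w.sum()
    fused_var = 1.0 / w.sum()           # fused estimate is tighter than any input
    return fused, fused_var

# Own noisy fix plus two estimates relayed by nearby robots.
own = ([10.2, 4.9], 2.0)
neighbours = [([9.8, 5.3], 1.0), ([10.5, 5.0], 4.0)]
pos, var = fuse([own[0]] + [e for e, _ in neighbours],
                [own[1]] + [v for _, v in neighbours])
print(f"fused position {pos}, variance {var:.2f}")
```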

Dr Hunjet said building trust in autonomous systems was critical to the technology’s uptake.

“Interaction between entities no doubt plays a large part in human trust. As such, the interface between a human operator and a machine should be designed to assist the human and reduce cognitive load,” he said.

Research aims to address how an AI might explain its decisions to a human operator in a manner that takes into account the operator’s state. That is, the machine would seek to provide an appropriate level of detail based on its understanding of the operator’s current cognitive load.
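The article describes the goal rather than a mechanism, so the following is only a sketch of the idea: assume some operator-state monitor produces a load score between 0 and 1, and scale the explanation detail accordingly. The thresholds, decision fields and messages are all invented.

```python
# Illustrative only: tailor explanation detail to operator cognitive load.
from dataclasses import dataclass

@dataclass
class Decision:
    action: str       # what the autonomous system chose to do
    reason: str       # the main factor behind the choice
    evidence: list    # supporting sensor observations

def explain(decision: Decision, operator_load: float) -> str:
    if operator_load > 0.8:      # overloaded: report the action only
        return decision.action
    if operator_load > 0.4:      # busy: action plus headline reason
        return f"{decision.action} because {decision.reason}"
    details = "; ".join(decision.evidence)   # low load: full justification
    return f"{decision.action} because {decision.reason} (evidence: {details})"

d = Decision("rerouting north", "obstacle detected on planned path",
             ["lidar return at 40 m", "map shows no alternate corridor south"])
for load in (0.9, 0.5, 0.1):
    print(f"load={load}: {explain(d, load)}")
```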

Trust is also gained through observation of repeated good performance. To ensure its technology works effectively and as expected, Defence is conducting research into verifiable autonomy.

Verification also needs to be considered from a test-and-evaluation perspective. Because many AI-based systems are specifically designed to learn and evolve, they do not necessarily behave in the same manner when presented with identical inputs, such as sensor information. For such systems, traditional regression-based approaches to testing are not appropriate.
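The contrast can be made concrete with a small sketch: a regression test asserts one exact output for one input and fails when outputs vary, whereas a statistical test checks that the distribution of behaviour meets a requirement. The stand-in policy, task and pass threshold below are invented for illustration.

```python
# Why exact regression checks fail for non-deterministic systems,
# and what a statistical alternative looks like.
import random

def stochastic_policy(sensor_reading: float) -> str:
    """Stand-in for a learned policy: same input, varying output."""
    return "avoid" if random.random() < 0.9 + 0.05 * sensor_reading else "proceed"

def assert_identical():                   # traditional regression check
    outputs = {stochastic_policy(0.5) for _ in range(100)}
    return len(outputs) == 1              # almost always fails: outputs vary

def assert_statistical(trials=1000, required_rate=0.9):
    hits = sum(stochastic_policy(0.5) == "avoid" for _ in range(trials))
    return hits / trials >= required_rate # passes if behaviour is reliable enough

print("regression-style check:", assert_identical())
print("statistical check:", assert_statistical())
```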

Future testing processes may need to be more akin to the issuance of a driver’s licence, where a curriculum is designed and competency is assessed, allowing the system to keep improving while performing a task. This concept is known as machine education.
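One loose reading of the “driver’s licence” idea, sketched below under assumptions of my own: define a curriculum of staged scenarios and grant competency only when the system clears each stage’s pass rate. The scenarios, thresholds and the DummySystem stand-in are all hypothetical.

```python
# Hypothetical curriculum-based competency assessment.
import random

CURRICULUM = [
    ("follow corridor",  0.95),   # (scenario name, required pass rate)
    ("avoid obstacles",  0.90),
    ("rejoin the swarm", 0.85),
]

def evaluate(system, scenario: str, trials: int = 200) -> float:
    """Run the system through repeated trials of a scenario."""
    return sum(system.run(scenario) for _ in range(trials)) / trials

def assess(system) -> bool:
    for scenario, required in CURRICULUM:
        rate = evaluate(system, scenario)
        print(f"{scenario}: {rate:.2f} (need {required:.2f})")
        if rate < required:
            return False              # stop at the first failed stage
    return True                       # 'licence' granted

class DummySystem:
    """Stand-in for a learning system; succeeds with fixed probability."""
    def run(self, scenario: str) -> bool:
        return random.random() < 0.98

print("competent:", assess(DummySystem()))
```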

Defence funds collaboration with Australian academic institutions and international partner organisations through its Trusted Autonomous Systems strategic research initiative.
