Defence Science and Technology Group has released a new report into ethical AI. (Image: Getty Images)

Based on a workshop held in August 2019, the Defence Science and Technology Group has released a new report into ethical uses of AI in a Defence context.

Gathering thought leaders from Defence, academia, industry, government agencies and the media, the workshop blended lectures, tutorials and breakout brainstorming sessions across a number of themes.

Twenty topics emerged from the workshop: education, command, effectiveness, integration, transparency, human factors, scope, confidence, resilience, sovereign capability, safety, supply chain, test and evaluation, misuse and risks, authority pathway, data subjects, protected symbols and surrender, de-escalation, explainability and accountability.

These topics were categorised into five facets of ethical AI:

  1.      Responsibility – who is responsible for AI?
  2.      Governance – how is AI controlled?
  3.      Trust – how can AI be trusted?
  4.      Law – how can AI be used lawfully?
  5.      Traceability – how are the actions of AI recorded?

The technical report, entitled A Method for Ethical AI in Defence, summarises the workshop discussions and outlines a pragmatic ethical methodology to improve communication between software engineers, integrators and operators during the development and operation of AI projects in Defence.

Chief Defence Scientist Professor Tanya Monro said AI technologies offer many benefits, such as saving lives by removing humans from high-threat environments, and improving Australia's advantage through deeper and faster situational awareness.

“Upfront engagement on AI technologies, and consideration of ethical aspects needs to occur in parallel with technology development,” Professor Monro said.

The significant potential of AI technologies and autonomous systems is being explored through the Science, Technology and Research (STaR) Shots of the More, together: Defence Science and Technology Strategy 2030, and in response to the updated National Security Science & Technology Priorities.

“Defence research incorporating AI and human-autonomy teaming continues to drive innovation, such as work on the Allied IMPACT (AIM) Command and Control (C2) System demonstrated at Autonomous Warrior 2018 and the establishment of the Trusted Autonomous Systems Defence CRC (TASCRC).”

A further outcome of the workshop was the development of a practical methodology that could support AI project managers and teams to manage ethical risks. This methodology includes three tools: an Ethical AI for Defence Checklist, an Ethical AI Risk Matrix and a Legal and Ethical Assurance Program Plan (LEAPP).
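To make the risk-matrix idea concrete, the sketch below shows one way a project team might record and triage ethical risks against the five facets. This is a hypothetical illustration only: the report does not publish the matrix's fields, and the facet names, likelihood/severity scales, scoring rule and treatment thresholds here are all assumptions.

    # Hypothetical sketch of an ethical AI risk matrix; field names, scales
    # and thresholds are assumptions, not taken from the DSTG report.
    from dataclasses import dataclass

    FACETS = ("responsibility", "governance", "trust", "law", "traceability")

    @dataclass
    class EthicalRisk:
        facet: str          # one of FACETS
        description: str    # e.g. "unclear authority pathway for weapon release"
        likelihood: int     # assumed scale: 1 (rare) to 5 (almost certain)
        severity: int       # assumed scale: 1 (negligible) to 5 (catastrophic)

        def score(self) -> int:
            # Conventional likelihood-by-severity product used in many risk matrices.
            return self.likelihood * self.severity

    def triage(risks: list[EthicalRisk]) -> dict[str, list[EthicalRisk]]:
        """Bucket risks into indicative treatment bands (thresholds are arbitrary)."""
        bands: dict[str, list[EthicalRisk]] = {"accept": [], "mitigate": [], "escalate": []}
        for r in risks:
            s = r.score()
            band = "escalate" if s >= 15 else "mitigate" if s >= 6 else "accept"
            bands[band].append(r)
        return bands

    if __name__ == "__main__":
        demo = [
            EthicalRisk("traceability", "decision logs not retained", 4, 3),
            EthicalRisk("law", "protected-symbol recognition untested", 2, 5),
        ]
        for band, items in triage(demo).items():
            print(band, [f"{r.facet}:{r.score()}" for r in items])

The design choice worth noting is that each risk is tagged to a single facet, which lets a project manager roll risks up per facet when reporting; a real implementation of the report's tooling may structure this differently.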

ADM Comment: I was involved in the initial consultation/brainstorming day at the ANU Shine Dome as part of this program. I was impressed with the level of engagement from the TASCRC and Plan Jericho teams with the delegates.

In particular, I was part of a syndicate of thinkers looking at the role of big data and AI in a health monitoring/logistics context. The hypotheticals were mind-bending. For example, a deployed soldier becomes pregnant on operations; the smart health monitoring system knows before they do. Who does the system tell, and when? The flow-on effects are enormous, and this is but one example.

The follow-up and engagement facilitated by Kate Devitt and her team is an excellent model for collaboration and innovation that could be applied in other areas of Defence thinking. An excellent report and an absorbing topic for those in our community.
