Ethical technology - we can, but should we?


Last month, the Plan Jericho team, in partnership with DST Group and the Trusted Autonomous Systems CRC, hosted a workshop on Ethical AI. With an aim “to derive and analyse ethical principles relevant to Defence contexts for AI and Autonomous systems that will inform military leadership and ethics doctrine”, the workshop brought together an interesting group of thinkers from a range of disciplines, both from Australia and overseas.

As the program notes, the rise of artificial intelligence and autonomous systems provides an opportunity to make Defence operations more effective, but it also presents unique ethical and legal challenges to decision-making, such as how to ensure appropriate action and moral responsibility for decisions. In essence, the technology is the easy part; the harder question is whether we should use it.

“Ethics in the development and, more importantly, in the deployment of new technologies is a challenge that must always be considered carefully,” Dr Tristan Perez, Leader Autonomy Assurance at the Trusted Autonomous Systems Defence CRC, told me. “This event provided a unique environment and opportunity to discuss these issues in relation to AI in defence contexts with world experts. The challenges faced by philosophers, ethicists, scientists, actuaries, lawyers, legislators, regulators, and developers are different in nature, and we saw elements of all of these throughout the event.

“I wore my decision science hat to this event, since my current work is not on the development of AI but on the tools that will provide crucial information to stakeholders to inform their decision-making processes when it comes to autonomous systems and AI,” Dr Perez said. “My biggest takeaway was the importance that test & evaluation will play in creating defensible and robust decision processes for those who must make decisions about the use of AI.

“A colleague once described ‘trust’ as having two elements: ‘integrity’ and ‘competency’. The former is to be incorporated as part of the design; the latter as part of test & evaluation.”

And that is the key part of the AI conversation: the issue of trust. Do we trust the algorithms making decisions? Can we defend how a decision was made? Who is accountable for the decisions that AI makes in the heat of battle – the soldier who deployed the system or the developer behind the code?

While it is easy to look at the platform side of the house when it comes to the application of AI (‘killer robots’ being the common mental shortcut), there are so many parts of the Defence organisation that would benefit from the use of ethically applied AI.

The event format was split between traditional lectures and group workshops where issues were explored. During one such session, a group I was part of looked at the role of AI in logistics, health and sustainment; not the sexiest parts of the Defence landscape, but vital ones nonetheless.

AI and big data are already being utilised in sustainment efforts. Vehicle, aircraft and ship-borne Health and Usage Monitoring Systems (HUMS) can alert users and maintainers to issues that need to be addressed. Logistics tracking systems have been in use for many years, with varying levels of effectiveness. The issues here are relatively well understood.
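To make that concrete, the sketch below shows the kind of threshold-based alerting a HUMS might perform. It is a minimal illustration only; the component names, metrics and limits are hypothetical assumptions, not any fielded Defence system.

```python
# Minimal, illustrative HUMS-style alerting sketch.
# All component names, metrics and limits here are hypothetical.
from dataclasses import dataclass

@dataclass
class Reading:
    component: str   # e.g. "gearbox" (assumed example)
    metric: str      # e.g. "vibration_rms" (assumed example)
    value: float     # latest sensor value
    limit: float     # assumed maintenance threshold

def check_readings(readings):
    """Return a human-readable alert for each reading over its limit."""
    alerts = []
    for r in readings:
        if r.value > r.limit:
            alerts.append(
                f"ALERT: {r.component} {r.metric} = {r.value:.2f} "
                f"exceeds limit {r.limit:.2f}; schedule inspection."
            )
    return alerts

if __name__ == "__main__":
    sample = [
        Reading("gearbox", "vibration_rms", 4.7, 3.5),
        Reading("engine", "oil_temp_c", 92.0, 110.0),
    ]
    for alert in check_readings(sample):
        print(alert)
```

The point of the sketch is that the system only flags issues for a maintainer; a human still decides what to do about them.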

Issues on the health front, however, are murkier. Would an AI decide that an ADF member who falls pregnant should be sent home from deployment without question? Who in their chain of command should be told, and when? Or what happens when an AI program decides who gets a medevac from the battlefield, based on a biometrically monitored soldier system with data linked to a medevac drone? Is the ADF comfortable with a computer potentially deciding who lives and dies? Where is the human in the loop? Can these rules be overridden or outright broken? None of these are easy questions, and the answers vary depending on who you talk to and the individual circumstances of the situation.
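One common answer to the human-in-the-loop question is to let the algorithm rank options but leave the final call, and the accountability, with a person. The sketch below illustrates that pattern only; the triage scoring, data fields and logging are assumptions for illustration, not a real medevac system.

```python
# Illustrative human-in-the-loop sketch. The scoring rule and
# biometric fields are hypothetical, not a real triage model.

def triage_score(casualty):
    """Crude urgency score from assumed biometrics (higher = more urgent)."""
    score = 0.0
    score += max(0, 100 - casualty["spo2"])        # low blood oxygen raises urgency
    score += max(0, casualty["heart_rate"] - 100)  # tachycardia raises urgency
    return score

def recommend_medevac(casualties, human_decision=None):
    """The AI only recommends; a human approves or overrides, and both are logged."""
    ranked = sorted(casualties, key=triage_score, reverse=True)
    recommendation = ranked[0]["id"]
    final = human_decision or recommendation
    print(f"LOG: AI recommended {recommendation}; human selected {final}")
    return final

if __name__ == "__main__":
    casualties = [
        {"id": "C1", "spo2": 88, "heart_rate": 130},
        {"id": "C2", "spo2": 97, "heart_rate": 85},
    ]
    recommend_medevac(casualties)                      # human accepts the recommendation
    recommend_medevac(casualties, human_decision="C2") # human overrides the AI
```

Keeping both the recommendation and the human decision in the log is what makes the process defensible after the fact: you can show what the algorithm advised and who chose to follow or override it.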

The Trusted Autonomous Systems CRC is actively working on this very puzzle, with Dr Perez and his team designing testing processes for systems to demonstrate what trust looks like. DST has also been working in this space for some time. Dr Rob Hunjet of DST spoke at the release of the STEM strategy last month and outlined this hypothetical: Who has a driver’s licence? Have you ever had a traffic infringement for parking or speeding? Ever been in a car accident? And you still have your licence? You wouldn’t if you were an unmanned vehicle.

It is important to keep having these hard ethical conversations as technology evolves.

This article first appeared in the September 2019 edition of ADM.
