
Ethics workshop urges military recruits to learn AI skills

Written by Staff Reporter | February 17, 2021

The Boeing Loyal Wingman prototype when it was unveiled (Boeing)

A key Defence workshop involving more than 100 participants has advised that military personnel need to be taught about artificial intelligence to avoid ethical dilemmas.

The findings were released this week in a paper, A Method for Ethical AI in Defence, which summarises the workshop and aims to create a framework for tackling future issues raised by AI technology.

Air Vice-Marshal Cath Roberts, Head of Air Force Capability, said artificial intelligence and human-machine teaming will play a key role in air and space power into the future.

“We need to ensure that ethical and legal issues are resolved at the same pace that the technology is developed. This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control,” AVM Roberts said.

“Practical application of these tools into projects such as the Loyal Wingman will assist Defence to explore autonomy, AI, and teaming concepts in an iterative, learning and collaborative way.”


Significantly, the report urges that Defence personnel be taught about AI in the same way they are taught key human leadership skills.

“Workshop participants considered the importance of education in the role of command,” reads the report. “They felt that when Defence teaches leadership and management to military officers, they teach aspects of human behaviour, cognition, and social factors.

“Thus, for a human to lead and/or manage an AI, they will need to understand the AI. Without understanding AI, the human will be uncomfortable, and the relationship will break down quickly.

“It is very likely that at least some aspects of AI will be embedded in every defence function and capability.

“Without early AI education to military personnel, they will likely fail to manage, lead, or interface with AI that they cannot understand and therefore, cannot trust.”

The event took place in Canberra from 30 July to 1 August 2019, with 104 people from 45 organisations in attendance, including representatives from Defence, Australian government agencies, the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC), civil society, universities and Defence industry.

It was designed to “elicit evidence-based hypotheses regarding ethical AI from a diverse range of perspectives and contexts and produce pragmatic methods to manage ethical risks on AI projects in Defence”.

The 20 topics that emerged were then grouped into five “facets” of ethics in AI:

  • Responsibility – who is responsible for AI?
  • Governance – how is AI controlled?
  • Trust – how can AI be trusted?
  • Law – how can AI be used lawfully?
  • Traceability – how are the actions of AI recorded?

Defence said in a statement, “The ethics of AI and autonomous systems is an ongoing priority and Defence is committed to developing, communicating, applying and evolving ethical AI frameworks. Australia is proactive in undertaking Article 36 legal reviews on new weapons, means and methods of warfare.”
