‘Spot’, a quadruped prototype robot, walks down a hill during a demonstration at Marine Corps Base Quantico. US DoD

A who’s who of CEOs, engineers and scientists from the technology industry – including Google DeepMind and Elon Musk – has signed a global pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons.”

Released in Stockholm at the 2018 International Joint Conference on Artificial Intelligence (IJCAI), the pledge was co-organised by UNSW’s Toby Walsh and signed by 150 companies and more than 2,400 individuals from 90 countries working in artificial intelligence (AI) and robotics. 

Signatory organisations include Google DeepMind, University College London, the XPRIZE Foundation, Clearpath Robotics/OTTO Motors, the European Association for AI, and the Swedish AI Society. Individual signatories include Jeff Dean, head of research at Google.ai; AI pioneers Stuart Russell, Yoshua Bengio, Anca Dragan and Toby Walsh; and British Labour MP Alex Sobel.

The pledge, organised by the Future of Life Institute, challenges governments, academia and industry to follow their lead, saying: “We, the undersigned, call upon governments and government leaders to create a future with strong international norms, regulations and laws against lethal autonomous weapons… We ask that technology companies and organisations, as well as leaders, policymakers, and other individuals, join us in this pledge.”

“I’m excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect,” Max Tegmark, president of the Future of Life Institute, said. “AI has huge potential to help the world – if we stigmatise and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilising as bioweapons, and should be dealt with in the same way.”

Lethal autonomous weapons systems (LAWS) are weapons that can identify, target and kill a person without a human ‘in the loop’. The definition excludes today’s drones, which are under human control, as well as autonomous systems that merely defend against other weapons.

“We cannot hand over the decision as to who lives and who dies to machines. They do not have the ethics to do so. I encourage you and your organizations to pledge to ensure that war does not become more terrible in this way,” organiser Toby Walsh, a professor of artificial intelligence at UNSW, said.

“Clearpath continues to believe that the proliferation of lethal autonomous weapon systems remains a clear and present danger to the citizens of every country in the world,” Ryan Gariepy, Founder and CTO of both Clearpath Robotics and OTTO Motors, said. “No nation will be safe, no matter how powerful.” 

In addition to the troubling ethical questions surrounding lethal autonomous weapons, many advocates of an international ban on LAWS are concerned that such weapons would be difficult to control – easier to hack, more likely to end up on the black market, and easier for terrorists and despots to obtain – risks that could prove destabilising for all.

In December 2016, the United Nations’ Review Conference of the Convention on Conventional Weapons (CCW) began formal discussions on LAWS. Twenty-six countries attending the conference, including China, have so far announced support for some form of ban.

Such a ban is not without precedent: biological weapons and chemical weapons were also banned, not only for ethical and humanitarian reasons, but also for the destabilising threat they posed. 

The next UN meeting on LAWS will be held in August 2018, and signatories hope the pledge will encourage lawmakers to commit to an international agreement banning such weapons.
