This article is part of the Remote Control Warfare series, a collaboration with Remote Control, a project of the Network for Social Change hosted by Oxford Research Group.
States’ ability to move forward on the issue of lethal autonomous weapons will depend not only on finding consensus on key concepts but also on having the will to pursue concrete outcomes.
April’s meeting of experts at the UN on lethal autonomous weapons systems (often shortened to LAWS or AWS) set out to consider questions relating to this emerging military technology, a continuation of UN talks begun in May 2014. These meetings took place under the aegis of the Convention on Certain Conventional Weapons (CCW), and brought together state representatives, NGOs and academics. The CCW meetings have demonstrated a divergence of views on the ethical and legal concepts that should be employed, and a complex debate that at times felt detached from reality; moreover, without a negotiating mandate there is a fear that the meetings could simply mire the issue in abstract debate, leaving states free to continue developing the technology in the meantime.
The UN Convention on Certain Conventional Weapons
For a long time the CCW was a neglected treaty, regarded by states and NGOs as an overambitious and failed attempt to combine elements of international humanitarian law with arms control. By the end of the 1980s, the CCW appeared to be floundering, with only 29 states parties. Yet in recent years participation has increased, and there are now 121 states parties to the convention. A total of 87 countries sent representatives to the first meeting on autonomous weapons, a record level of participation for the CCW; 88 countries were present at April’s meeting.
The purpose of the CCW is explicitly “to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.” The CCW is an evolving body of international humanitarian law, with a framework that is dynamically structured to be responsive to the concerns raised by the international community. The recognition that the law is not static is therefore a particular strength, indeed a cornerstone of the CCW.
The CCW’s talks in May 2014 and April 2015 were undertaken with a mandate “to discuss the questions related to emerging technologies in the area of lethal autonomous weapons systems, in the context of the objectives and purposes of the convention.” A ban on autonomous weapons would join the CCW’s five existing protocols on non-detectable fragments, landmines, incendiary weapons, blinding laser weapons and explosive remnants of war. The uptake of the issue of lethal autonomous weapons by the CCW has been unprecedented in its speed, and could indicate a move towards prohibition. However, because there is no negotiating mandate, the talks could also serve the strategic interests of states keen to debate abstract principles while continuing to develop the technology. The annual meeting of the CCW in November 2015 will decide formally whether to continue the talks, based largely on the content of the April meeting.
The most contentious question discussed so far is the one autonomous weapons pose with regard to human control. It was approached through the contested concepts of ‘autonomy’ and ‘meaningful human control’. The United States, the UK, France and Germany are all in favor of the notion of autonomy as a guiding principle. The US was one of the first states to advocate for this concept when its Department of Defense issued the first policy announcement by any country on autonomous weapons systems in November 2012, just three days after Human Rights Watch had brought the issue into the global spotlight. Interestingly, the directive refers not to ‘fully autonomous’ weapons, but to ‘autonomous weapon systems’ that include human supervision. This supports the view advocated by the US at April’s CCW meeting: as long as humans are ‘in the loop’, weapons systems are not fully autonomous and are therefore compliant with international humanitarian law. The UK, France and Germany likewise attach human involvement to autonomous weapons systems. In a general exchange of views, the UK representative offered the assurance: “there will be human oversight in this new territory where lethal autonomous weapons systems can go […] Autonomous systems do not exist, and will never exist” (author’s own transcription).
However, some objected that this was a ‘knockdown’ argument intended to rhetorically shut down the controversy about the lack of human control. The International Committee for Robot Arms Control (ICRAC) is an international association of experts that was set up with the specific goal of getting governments to talk to each other about the continuing automation of warfare, and was present at both the 2014 and 2015 meetings. ICRAC’s interpretation of the DoD policy was that it was designed to “green-light” weapon systems able to select and engage human targets. Together with 272 experts in computer science, engineering, artificial intelligence, robotics, and related disciplines from 37 countries, ICRAC issued ‘The Scientists’ Call’, stating: “[G]iven the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines”. Their message was clear: within such systems, human control would not be ‘meaningful’.
Meaningful human control
India and Pakistan expressed confusion over this idea of meaningful human control, observing that the presence of meaningful human control would mean the weapons systems were not then ‘autonomous’. In their opinion the question should be whether or not independent weapon systems can comply with international humanitarian law: whether they can distinguish between civilians and combatants, make proportionality assessments, and comply with other time-tested legal principles. Richard Moyes of Article 36 countered that if discussion focuses too heavily on undefined hypothetical systems’ ability to comply with international humanitarian law, legal arguments could become separated from reality. In particular, he argued that the law is a human framework applied to humans. A state representative from Greece agreed, saying that autonomous weapons should be addressed ethically rather than legally or technically, as the question is whether or not humans should delegate life and death decisions to a machine. The debate around autonomous weapons’ ability to comply with international humanitarian law is a misguided one if it fails to grapple with the bigger, underlying issues that would be raised. Banning such systems, in fact, is about maintaining something unique in the decision-making process: a human with intent behind the act of killing. Cuba, Ecuador, Pakistan, Sri Lanka and Palestine agreed with this argument and called for a prohibition.
Potential for convergence
Consensus was reached on the undesirability of fully autonomous weapons systems. Ambassador Michael Biontino of Germany, who chaired the April meeting, wrote in his report that the following area of common understanding had emerged: “machines or systems tasked with making fully autonomous decisions on life and death without any human intervention, were they to be developed, would be in breach of international humanitarian law, unethical and to possibly even pose a risk to humanity itself.” However, because parties largely disagree about what constitutes human intervention, this statement is of limited value. The contradictory definitions used at the CCW meetings have created a lack of clarity for policymakers; it remains largely undecided what the world would look like if autonomous weapons came into existence.
The April talks give some idea not only of the shape of the debate going forward, but also of the potential limitations of the CCW talks themselves as a forum for discussion without a negotiating mandate. One significant milestone would be the establishment of a broad, representative and universal Group of Governmental Experts (GGE) next year, which would move the discussion from an informal to a formal setting. It has been suggested that the current lack of common language makes this discussion challenging, and that it is critical to avoid rushing into formal discussions. However, it does not seem premature for prohibition to be on the agenda in a body that was designed to create prohibitions. A GGE seems a necessary next step to keep states focused on a practical outcome.
Lene Grimstad served as an observer at the 2014 and 2015 Geneva Meetings of Experts on Lethal Autonomous Weapons Systems, and holds an MA in Society, Science and Technology in Europe from the University of Oslo and ESST (European Inter-University Association on Society, Science & Technology).
Featured Image: Meeting of Experts on Lethal Autonomous Weapons Systems in April 2015. Source: Flickr | UN Geneva