by Elizabeth Minor, Researcher at Article 36
Later this month, governments will meet in Geneva to discuss lethal autonomous weapons systems. Previous talks – and growing pressure from civil society – have not yet galvanised governments into action. Meanwhile the development of these so-called “killer robots” is already being considered in military roadmaps. Their prohibition is therefore an increasingly urgent task.
From 13-17 April, governments will meet at the United Nations in Geneva to discuss autonomous weapons – also referred to as killer robots. The week-long meeting will be the second round of multilateral expert discussions on “lethal autonomous weapons systems” to take place within the framework of the United Nations’ Convention on Certain Conventional Weapons (CCW).
Urgent and coordinated international action is needed to prevent the development and use of fully autonomous weapons systems. Such systems would fundamentally challenge the relationship between human beings and the application of violent force, whether in armed conflict or in domestic law enforcement. Once activated and their mission defined, these systems would be able to select targets and carry out attacks on people or objects, without meaningful human control. As states with high-tech militaries such as China, Israel, Russia, South Korea, the UK, and the US continue to invest in increasingly autonomous weapons technologies, consideration of this issue becomes ever more urgent. Campaigners are calling on states to tackle this issue by developing a treaty that pre-emptively bans these weapons systems before they are put into operation, by which time it may be too late.
Weapons systems that do not permit the exercise of meaningful human control over individual attacks should be prohibited, due to the insurmountable ethical, humanitarian and legal concerns they raise. The governance of the use of force and the protection of individuals in conflict require control over the use of weapons and accountability and responsibility for their consequences. This principle, rather than any particular piece of technology or format of weapons delivery, is at the heart of the issue of autonomous weapons systems. Some have argued that fully autonomous weapons systems might reduce the risk of conflict or be able to better protect civilians. However, the focus must remain on these systems’ overall implications for the conduct of violence, rather than on a small range of hypothetical possibilities.
Tasks can be given to hardware and software systems. Responsibility for violence cannot. The process of rendering the world ‘machine-sensible’ reduces people to objects. This is an affront to human dignity. Computerised target-object matching – using techniques such as shape detection, thermal imaging and radiation detection – may enable the identification of objects such as military vehicles, though not necessarily with accuracy in complex and civilian-populated environments. However, assessment of information about these objects and the surrounding environment, including the presence of protected persons such as civilians or wounded combatants, is also essential to uphold the principles that govern the launching of individual attacks under International Humanitarian Law. These are not quantitative rules, but considerations that require deliberative moral reasoning and contextual decision-making. As such, they could not be translated into software code. Based on the principle of humanity, they implicitly require human judgement and control over the process of decision-making in individual attacks.
Other concerns about the development of fully autonomous weapons systems include the dangers of proliferation among state and non-state actors, hacking, and the use of these systems in law enforcement or other situations outside of warfare.
A pre-emptive ban as a solution
Whilst the Campaign to Stop Killer Robots is calling on states to move with urgency towards negotiations on a treaty to outlaw fully autonomous weapons systems, previous talks in Geneva have not yet galvanised governments into action.
Some states have suggested that existing law is sufficient to tackle this issue. Existing international law, which was developed prior to any consideration of autonomous weapons systems, implicitly assumes that the application of force is governed by humans. This body of international law is now inadequate as a reliable barrier to the development and use of fully autonomous weapons systems. A pre-emptive ban through an international instrument would not only halt any progress on these systems amongst states parties, but would help to stigmatise development by others.
Some states have argued that this issue can be dealt with by conducting individual reviews of their weapons technologies to ensure they continue to uphold current international law. However, states are already obligated to conduct such reviews, and whilst these are important, they will not be sufficient to prevent the development of these systems internationally. A clear legal standard and norm needs to be set, and this is best done through new international treaty law.
A ban based around prohibiting systems that operate without meaningful human control over individual attacks should be the starting point in international discussions among states. Elaborating and agreeing on the elements of this principle is therefore a necessary next step.
International response so far
To date, autonomous weapons have been raised at the Human Rights Council in 2013 and considered by governments in dedicated discussions held at expert meetings of the CCW in 2014. The UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, called in 2013 for national moratoria to be imposed by all states on the “testing, production, assembly, transfer, acquisition, deployment and use” of these systems, until an internationally agreed framework on their future has been established. The CCW could be a possible venue for developing this, having previously produced a pre-emptive ban on blinding laser weapons. One should note, though, that previous attempts within the CCW to deliver the responses needed to certain weapons systems have occasionally failed, often hampered by operating under the consensus rule and a tendency to defer to military considerations rather than focus on humanitarian or ethical imperatives.
Promisingly, the need to ensure meaningful human control has already been a prominent feature of the debate at the CCW, with several states recognising the importance of this approach. In upcoming discussions, governments should elaborate their policies for maintaining meaningful human control over existing weapons systems in individual attacks. Such an exchange would advance consideration of how human control can be ensured over future systems. This would in turn help clarify what practices and potential systems must be prohibited and the standards that states must demonstrate that they are meeting in their conduct. Elements to consider could include the need for adequate information to be available to commanders using any weapons system, positive action from a human being in launching individual attacks, and ensuring accountability.
Few states have elaborated any policy on human control over weapons systems. Current US policy on autonomous weapons systems stresses that there should be “appropriate levels of human judgement over the use of force”, but does not define what these should be. The policy leaves the door open for the development of fully autonomous weapons systems, whilst recognising the harm they could cause to civilians. The UK government has stated that it has no intention to develop fully autonomous weapons and that “human control” over any weapons system must be ensured. However, it has not given sufficient elaboration of what exactly this means and how it will be ensured.
States may consider different modes of operating, supervising or overseeing weapons systems to constitute acceptable control. Agreement between states on the concept of meaningful human control is therefore an important element of international progress on the issue of fully autonomous weapons systems.
Work by states on an international framework should be supported by input from civil society and draw on the views of a range of experts. Ultimately, negotiation processes will determine the definitions of key concepts. If discussions do not advance towards a binding framework within the CCW, a freestanding treaty process may be required, as was the case previously in the processes to outlaw both anti-personnel landmines and cluster munitions.
The upcoming meeting of experts at the CCW in April is unlikely to produce concrete outcomes, given the nature and format of the meeting. It could, however, pave the way for a decision in November that states continue to discuss this issue in 2016 and put it on the agenda for the CCW’s 2016 Review Conference. At that point it could be flagged as a subject on which States Parties should develop a new binding protocol. No clear group to lead this process has yet emerged. So far Cuba, Ecuador, Egypt, the Holy See, and Pakistan have endorsed a pre-emptive ban on autonomous weapons systems. France secured consensus for the CCW mandate in 2013 that established its work on lethal autonomous weapons systems, and Germany will chair the upcoming meeting, with the aim of seeking consensus on further consideration of the subject. However, the development of fully autonomous weapons systems is already being considered in military roadmaps. This makes their prohibition an urgent task.
Featured image: The UK’s Taranis stealth UAV. The Taranis exemplifies the move toward increased autonomy as it aims to strike distant targets “even on other continents”, although humans are currently expected to remain in the loop. Source: Flickr | QinetiQ