Intervening in the 'rise' of the lethal autonomous weapon

Researchers in AI and robotics have called for a ban on lethal autonomous weapons, loosely termed 'killer robots'. Prof Lucy Suchman, Lancaster University, explains why she added her name to the call.

ResponsibleSci blog, 17 September 2015
 

Amidst endless screenshots from Terminator 3: Rise of the Machines [1] and seemingly obligatory invocations of Stephen Hawking, Elon Musk and Steve Wozniak as celebrity signatories, the media reported the release on 28 July of an open letter signed by thousands of robotics and Artificial Intelligence (AI) researchers calling for a ban on lethal autonomous weapons.

Despite the media focus, the inclusion of celebrity signatories is less significant than the number of prominent computer scientists (not a group prone to political calls to action) who have been moved to endorse the letter.

Their concerns turn not on the prospect of a Terminator-style humanoid or 'sentient' bot, but on the more mundane progression of increasing automation in military weapon systems: in this case, the automation of the process of identifying particular categories of humans as legitimate targets for killing (for example, those in a designated area, or those who fit a specified and machine-readable profile).

The developments cited in the letter are specific, imminent and, despite their lack of filmic qualities, loaded with ethical issues that the signatories (myself included) believe have not been fully addressed.

Noel Sharkey, roboticist and Chair of the International Committee for Robot Arms Control, points out [2] that rather than assuming humanoid form, lethal autonomous weapons are much more likely to look like already-existing weapons systems, including tanks, battleships and jet fighters. The core demand of the letter mirrors that of the Campaign to Stop Killer Robots (launched in 2013 by a coalition of NGOs): an international ban that would pre-empt the delegation of 'decisions to kill' to machines.

Crucially, delegating the 'decision to kill' to machines means removing the element of human deliberation typically associated with the word 'decision'. In its place is the specification of computationally tractable algorithms for the identification of a legitimate target. Such a target (under the Rules of Engagement, International Humanitarian Law and the Geneva Conventions) is an opponent that is engaged in combat and poses an 'imminent threat'. Even aside from questions regarding the legitimacy of targeting protocols, there is ample evidence for the increasing uncertainties involved in differentiating combatants from non-combatants under contemporary conditions of war fighting.

The premise that legitimate target identification could be rendered sufficiently unambiguous to be automated reliably is at this point unfounded (apart from certain nonhuman targets like incoming missiles with very specific 'signatures').

Reliable decision-making regarding targets is one issue. Another is the challenge of assigning moral and legal responsibility — a process on which the existing regulatory apparatus that comprises the laws of war relies fundamentally.

However partial and fragile its reach, this regulatory regime is our best current hope for articulating limits on killing. Specifically, the locus for a ban on lethal autonomous weapons lies in the United Nations Convention on Certain Conventional Weapons (CCW), the body created "to ban or restrict the use of specific types of weapons that are considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately."

Achieving such a legally binding international agreement is a huge task, but there has been some progress. Since the launch of the campaign in 2013, the CCW has put the debate on lethal autonomous weapons onto its agenda and held two international 'expert' consultations. At the end of 2015, the CCW will consider whether to continue discussions or to move forward on negotiating an international treaty.

However controversial, remotely-operated drones comprise a different class of weapons from those to which the letter is addressed. In an interview with BBC World News, Prof Heather Roff, who helped to draft the open letter, pointed to the fact that the targets for current drone operations are 'vetted and checked', in the case of the US military by a Judge Advocate General (JAG). Whatever questions we might have regarding the criteria for drone targeting, Roff emphasized that what matters in the context of the letter is that "there is a human being actually making that decision, and there is a locus of responsibility and accountability that we can place on that human."

In the case of lethal autonomous weapons, she argues, human control is lacking "in any meaningful sense."

The question of 'meaningful human control' [3] has become central to debates about lethal autonomous weapons. Roff is now beginning a project [4] to develop the concept more fully. The project will bring together computer scientists, roboticists, ethicists, lawyers, diplomats and others to create a dataset "of existing and emerging semi-autonomous weapons [and] to examine how autonomous functions are already being deployed and how human control is maintained."

To appreciate the urgency of the need for interventions into the development of lethal autonomous weapons, it is helpful to consider the concept of 'irreversibility,' which acknowledges the increasing difficulty of undoing technological projects over time [5]. Investments (both financial and political) increase and associated infrastructures (both material and social) become embedded; as a result, the efforts required to dismantle established systems grow commensurately.

As part of a broader concern to interrupt the intensification of automated killing, Jutta Weber and I have recently attempted to set out a conception of what we call human-machine autonomies. We argue that precisely because our capacities for action as humans are increasingly inseparable from our technologies, there is an urgent need to reinstate human deliberation at the heart of matters of life and death. 
 

Lucy Suchman is Professor of Anthropology of Science and Technology at Lancaster University and President-elect of the Society for Social Studies of Science (4S).  Previously she was a Principal Scientist at Xerox’s Palo Alto Research Center, where she was a founding member of Computer Professionals for Social Responsibility in the 1980s. Her current research extends her longstanding engagement with the field of human-computer interaction to the domain of contemporary war fighting.
 

Notes

1. Warner Bros Pictures, 2003

2. As reported on CNET, 27 July 2015

3. As formulated by NGO Article 36 and embraced by United Nations special rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns

4. In collaboration with Article 36 and funded by the Future of Life Institute

5. See for example: Callon M (1990) Techno-economic networks and irreversibility. The Sociological Review, 38: 132–161.
