
 

It is important that the artificial intelligence community recognises that many of our technological achievements are now being directed towards the development of fully autonomous weapons, also known as lethal autonomous weapons systems, lethal autonomous robots and killer robots. These are weapons systems, controlled by computer programs, that once activated will select targets and attack them with violent force.

 

We are not discussing autonomous robots in general here, but only the use of unsupervised autonomous targeting and attack. This is a timely symposium for AISB 50 as the issues concerning these weapons have been dramatically highlighted by the international community in 2013.

 

Some states already use a number of automated weapon systems that, on their own, intercept high-speed inanimate objects such as incoming missiles, artillery shells and mortar bombs, including saturation attacks. Examples include C-RAM, Phalanx, NBS Mantis and Iron Dome. These systems complete their detection, evaluation and response process within a matter of seconds, which makes it extremely difficult for human operators to exercise meaningful supervisory control other than to switch them on and off. So far, such systems have been deployed in relatively uncluttered environments, devoid of civilians.

 

But there is an ever-increasing push by several states to develop distance weapons that could operate beyond the reach of human supervisory control. The US has conducted advanced testing on a number of autonomous weapons platforms, such as the X-47B, a fast subsonic autonomous jet that can now take off and land on aircraft carriers; the Crusher, a 7-ton autonomous ground robot; and an autonomous hunting submarine. The Chinese are working on the Anjian supersonic autonomous air-to-air combat vehicle. The Russians are developing the autonomous Skat jet fighter. Israel has the autonomous Guardium ground robot, and the UK is in advanced testing of Taranis, a fully autonomous intercontinental combat aircraft.

 

The main concerns about these fully autonomous weapons include (i) the crossing of a fundamental moral line by allowing the decision to kill humans to be delegated to machines, (ii) the lack of any guarantee that the technologies are fit to predictably comply with international law, and (iii) the disruption of international security and mass proliferation.

 

In April 2013, a global coalition, led by the International Committee for Robot Arms Control, Human Rights Watch, the Nobel Women's Initiative, Pugwash, IKV Pax Christi, Article 36, Mines Action Canada, WILPF and AAR Japan, launched the Campaign to Stop Killer Robots. The campaign issued a call for a pre-emptive ban on the development, production and use of fully autonomous weapons. It is growing rapidly and now consists of 50 non-governmental organisations from 23 countries.

 

The campaign's issues are being taken seriously by many states. At the UN, 44 states have now spoken out about the issue, and Ban Ki-moon, the UN Secretary-General, has issued a statement about killer robots. In the UK, the birthplace of AISB, there have been debates about autonomous weapons systems in both the House of Commons and the House of Lords.

 

In May this year, Christof Heyns, the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, reported to the Human Rights Council citing a wide range of objections to fully autonomous weapons and called for a worldwide moratorium on lethal autonomous robots. The International Committee of the Red Cross is currently holding consultations on the issues. And in November, the 117 states parties to the Convention on Certain Conventional Weapons passed a mandate without objection to begin expert consultations in May 2014.

 

The International Committee for Robot Arms Control issued a statement in October, signed by 270 computing experts, calling for a ban on killer robots.

 

It is important that these issues are discussed within the technical community. The focus of this symposium will be on the technical, ethical, legal and policy concerns raised by the application of armed robots in modern conflicts. The main questions to be addressed include:

 

• should the decision to kill be delegated to machines?

 

• can computer systems comply with International Humanitarian Law?

 

• will automating the kill decision ultimately lead to the automation of warfare?

 

• should there be a legally binding international prohibition treaty?
