Press "Enter" to skip to content

Should robots kill without human supervision?

Leslie
Courtesy: UCSD Coordinated Robotics Lab
Should robots be allowed to decide for themselves whom to kill, and whom to keep alive, on the battlefield?

This is what experts from the 117 countries party to the Convention on Certain Conventional Weapons (CCW), including India, are debating at their four-day meeting on Lethal Autonomous Weapons Systems (LAWS) at the United Nations in Geneva, which ends on 16 May.
Most humans have a sense of right and wrong, and when they err in judgement they can be punished by the courts of their respective countries. But what would one do with autonomous robots, unless states enact legislation to govern them?
This line of thinking is central to any discussion of autonomous robots, even as the prospect of fewer humans being killed in war is attractive.
Not that this debate, or the battlefield, is new to robots.
On 30 May 2013, the United Nations Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, called for a global pause in the development and deployment of lethal autonomous robots (LARs), to allow “serious and meaningful international engagement on this issue before we proceed to a world where machines are given the power to kill humans”.
“While drones still have a ‘human in the loop’ who takes the decision to use lethal force, LARs have on-board computers that decide who should be targeted,” Heyns said in the statement.
Jeffrey S. Thurnher, a faculty member in the international law department at the US Naval War College, said in a December 2013 media statement issued by the National Defense University Press that the number of US unmanned aerial vehicles (UAVs) proliferated from fewer than 50 in 2000 to over 7,000 in 2010.
The US Phalanx system for Aegis-class cruisers, for instance, automatically detects, tracks and engages anti-air warfare threats such as anti-ship missiles and aircraft. The US Counter Rocket, Artillery and Mortar (C-RAM) system can automatically destroy incoming artillery, rockets and mortar rounds. Israel’s Harpy is a “Fire-and-Forget” autonomous weapon system designed to detect, attack and destroy radar emitters.
The UK Taranis jet-propelled combat drone prototype can autonomously search for, identify and locate enemies, but can engage a target only when authorized by mission command. It can also defend itself against enemy aircraft.
The Northrop Grumman X-47B is a fighter-size drone prototype commissioned by the US Navy to demonstrate autonomous launch and landing capability on aircraft carriers and navigate autonomously. The Samsung Techwin surveillance and security guard robots, deployed in the demilitarized zone between North and South Korea, detect targets through infrared sensors. They are currently operated by humans but have an “automatic mode”.
Moreover, unmanned combat aerial vehicles (UCAVs), commonly known as drones, can be operated remotely, keeping their controllers out of the line of fire. LARs, however, would mean robots deciding their own targets.
This month (12 May), Kathleen Lawand, head of the International Committee of the Red Cross’ (ICRC) Arms Unit, said new weapons with autonomous features should be subject to a thorough legal review to ensure they are capable of being used in accordance with international humanitarian law, something countries are required to do for any new weapon.
She pointed out that major concerns persist over whether a fully autonomous weapon could make the complex, context-dependent judgements required by international humanitarian law. For instance, in the heat of battle, would an autonomous weapon be capable of distinguishing between a civilian and a combatant? Would it be capable of cancelling an attack that would have disproportionate incidental effects on civilians?
“This represents a monumental programming challenge that may well prove impossible to achieve,” said Lawand.
We can expect considerable improvements in the technology over the long term, she added, but compliance of autonomous weapons systems with international humanitarian law cannot be guaranteed for the foreseeable future.
On 14 May, the International Committee for Robot Arms Control (ICRAC) said in a statement that the unpredictability of autonomous weapons in unanticipated circumstances makes weapons reviews extremely difficult, or even impossible. It added that the “combined strengths of humans and computers operating together with the human in charge of targeting decisions makes better military sense and is necessary in order to meet the requirements of international law”.
Noel E. Sharkey, Professor of Artificial Intelligence and Robotics and Professor of Public Engagement in the Department of Computer Science at the University of Sheffield, UK, argued in a 30 June 2013 paper that we must continue to ensure that humans make the moral decisions and maintain direct control of lethal force, instead of handing these life-and-death decisions over to armed robots.
Robots, he explained, lack three of the main components required to ensure compliance with the principle of distinction.
First, they do not have adequate sensory or vision-processing systems for separating combatants from civilians, particularly in insurgent warfare, or for recognizing wounded or surrendering combatants. All that is available to robots are sensors such as cameras, infrared sensors, sonar, lasers, temperature sensors and ladar.
Second, a computer can compute any given procedure that can be written down in a programming language. The problem for the principle of distinction is that we do not have an adequate definition of a civilian that we can translate into computer code. Third, even if machines had adequate sensing mechanisms to tell civilians from uniformed combatants, they would still lack the battlefield awareness or common-sense reasoning needed to assist in discrimination decisions.
Given these deficiencies, Sharkey concluded that, at this point, we cannot rely on machines ever having the independent facility to apply the principle of distinction as well as human soldiers can. There is no evidence or research to suggest otherwise.
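Sharkey's second point can be made concrete with a small, purely hypothetical sketch in Python. None of the names below (SensorReading, is_civilian, engage_decision) come from any real weapons system or from Sharkey's paper; they simply illustrate that an autonomous targeting routine ultimately rests on a predicate ("is this person a civilian?") for which no machine-translatable definition exists, so the decision has to fall back to a human.

```python
# Illustrative only: hypothetical names, not drawn from any real system.
from dataclasses import dataclass


@dataclass
class SensorReading:
    """What a robot actually has: raw signals, not legal categories."""
    thermal_signature: float  # infrared intensity
    carries_object: bool      # a shape that might be a rifle, or a farm tool
    in_uniform: bool          # pattern-matched clothing, often unreliable


def is_civilian(reading: SensorReading) -> bool:
    # International humanitarian law defines civilians negatively (anyone who is
    # not a combatant) and contextually (direct participation in hostilities).
    # There is no agreed, sensor-level definition to implement here; that gap,
    # not the surrounding code, is the hard part Sharkey points to.
    raise NotImplementedError("no machine-translatable definition of 'civilian'")


def engage_decision(reading: SensorReading) -> str:
    try:
        if is_civilian(reading):
            return "hold fire"
        return "request human authorisation"
    except NotImplementedError:
        # The distinction judgement cannot be completed autonomously,
        # so the decision is referred back to a human operator.
        return "refer to human operator"


print(engage_decision(SensorReading(36.5, carries_object=True, in_uniform=False)))
# -> "refer to human operator"
```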
This month (on 2 May), Sharkey reiterated that autonomous weapons, those that once activated would select targets and attack them without further human intervention, could not be used in a way that could be guaranteed to predictably comply with international law.
Not all scientists agree, though.
Ronald Arkin, Regents’ Professor, Director of the Mobile Robot Laboratory, and associate dean for research in the College of Computing at the Georgia Institute of Technology, said in a July 2013 paper that there is reason to believe in the future that robotic systems will be able to pierce the fog of war more effectively than humans ever could.
Unmanned robotic systems, he said, can be designed without emotions that cloud judgment or give rise to the anger and frustration with ongoing battlefield events that humans are prone to.
He urged that humans “should not stifle research in the area or accede to the fears that Hollywood and science fiction in general foist upon us”. “By merely stating these systems cannot be created to perform properly and ethically does not make it true. If that were so, we would not have supersonic aircraft, space stations, submarines, self-driving cars (e.g. Google Inc.’s driverless car) and the like,” he said.
He concluded: “I am not opposed to the removal of lethal autonomous systems from the battlefield, if international society so deems it fit, but I think that this technology can actually foster humanitarian treatment of noncombatants if done correctly.”
Robots, one may note, do help in surgeries. They assist the elderly and act as companions. They automate tasks and ease the burden on humans. Robots such as snake bots, aerial vehicles and rovers today search for and rescue survivors from debris caused by earthquakes, especially in areas humans can’t reach.
Many of these robots are known by name. Quince, for instance, developed by the Chiba Institute of Technology and Tohoku University, is being deployed to sense chemical, biological, radiological or nuclear dangers in areas that firefighters can’t reach.
India’s Defence Research and Development Organisation (DRDO) has its own ‘Sanjeevani’ (meaning ‘life giver’) for similar situations; its initial version was used to detect survivors after the Gujarat earthquake in 2001. Available as a portable pack, the robot’s probe head can be used in air, water or mud.
These robots are categorised as ‘assistive’ robots, similar to those that help humans clean rugs (like the Roomba), wash windows, or minimise accidents by doing dangerous, repetitive tasks on industrial shop floors.
However, assistive robots look like machines, while another category of ‘social’ humanoid robots (even androids) has started looking and behaving more like humans. Some even serve as human companions, such as eldercare bots and nanny bots, and there has been talk of “marrying robot companions” too. To be categorised as ‘social’, these robots need autonomy (they cannot be merely remote-controlled). Examples include Kismet, Leonardo, Maggie, Domo, Mertz, AIBOs, Furbies, Cog and Paros.
Honda’s ASIMO, a humanoid robot, even celebrated its 10th birthday on 31 October 2010.
With robots gaining a more human-like appearance and becoming more social with advancements in artificial intelligence (AI) over the past 50 years, more challenges will arise.
However, the danger of being shot by an armed robot that may or may not be able to comply with international humanitarian law is one debate that will not go away in a hurry.